971 results for Data integrity.


Relevance: 30.00%

Abstract:

Within the QUT Business School (QUTBS), researchers across economics, finance and accounting depend on data-driven research. They analyse historic and global financial data across a range of instruments to understand the relationships and effects between them as they respond to news and events in their region. Scholars and Higher Degree Research (HDR) students in turn seek out universities that offer these particular datasets to further their research. This involves downloading and manipulating large datasets, often with a focus on depth of detail, frequency and long-tail historical data. Because this is stock exchange data with potential commercial value, the license for access tends to be very expensive. This poster reports the following findings:
• The library has a part to play in freeing researchers from the burden of negotiating subscriptions, fundraising and managing the legal requirements around licensing and access.
• The role of the library is to communicate the nature and potential of these complex resources across the university, to disciplines as diverse as Mathematics, Health, Information Systems and Creative Industries.
• The service has demonstrated clear, concrete support for research by QUT Library and built relationships with faculty. It has made data available to all researchers and attracted new HDRs. The aim is to reach the threshold of research outputs to submit into FOR Code 1502 (Banking, Finance and Investment) for ERA 2015.
• It is difficult to identify which subset of a dataset will be obtained, given somewhat vague price tiers.
• The integrity of the data is variable, as it is limited by the way it is collected; this occasionally raises issues for researchers (Cook, Campbell, & Kelly, 2012).
• Improved library understanding of the content of our products and of the nature of finance-based research is a necessary part of the service.

Relevance: 30.00%

Abstract:

This work aims to promote integrity in autonomous perceptual systems, with a focus on outdoor unmanned ground vehicles equipped with a camera and a 2D laser range finder. A method to check for inconsistencies between the data provided by these two heterogeneous sensors is proposed and discussed. First, uncertainties in the estimated transformation between the laser and camera frames are evaluated and propagated up to the projection of the laser points onto the image. Then, for each pair of laser scan-camera image acquired, the information at corners of the laser scan is compared with the content of the image, resulting in a likelihood of correspondence. The result of this process is then used to validate segments of the laser scan that are found to be consistent with the image, while inconsistent segments are rejected. Experimental results illustrate how this technique can improve the reliability of perception in challenging environmental conditions, such as in the presence of airborne dust.
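The uncertainty-propagation step described above, projecting a laser point onto the image together with its covariance, can be sketched as follows (the camera intrinsics, laser point and covariance below are invented values for illustration, not the paper's calibration):

```python
import numpy as np

def project(point_cam, fx, fy, cx, cy):
    """Pinhole projection of a 3D point (camera frame, metres) to pixels."""
    x, y, z = point_cam
    return np.array([fx * x / z + cx, fy * y / z + cy])

def projected_covariance(point_cam, cov_point, fx, fy, cx, cy):
    """First-order propagation of the point's covariance into pixel space."""
    x, y, z = point_cam
    # Jacobian of the pinhole projection with respect to (x, y, z)
    J = np.array([[fx / z, 0.0, -fx * x / z**2],
                  [0.0, fy / z, -fy * y / z**2]])
    return J @ cov_point @ J.T

point = np.array([0.5, 0.1, 4.0])      # laser point in the camera frame
cov = np.diag([1e-4, 1e-4, 4e-4])      # uncertainty from the extrinsic calibration
uv = project(point, 700.0, 700.0, 320.0, 240.0)
cov_uv = projected_covariance(point, cov, 700.0, 700.0, 320.0, 240.0)
```

The 2x2 covariance `cov_uv` bounds the image region over which a laser-image correspondence likelihood would then be evaluated.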

Relevance: 30.00%

Abstract:

The quality of the data collection methods selected and the integrity of the data collected are integral to the success of a study. This chapter focuses on data collection and study validity. After reading the chapter, readers should be able to: define types of data collection methods in quantitative research; list advantages and disadvantages of each method; discuss factors related to internal and external validity; critically evaluate data collection methods; and discuss the need to operationalise variables of interest for data collection.

Relevance: 30.00%

Abstract:

Enterprise resource planning (ERP) systems are rapidly being combined with “big data” analytics processes and publicly available “open data sets”, usually from outside the enterprise, to expand activity through better service to current clients as well as to identify new opportunities. Moreover, these activities are now largely based on relevant software systems hosted in a “cloud computing” environment. The more than 50-year-old phrase expressing mistrust in computer systems, “garbage in, garbage out” (“GIGO”), describes the problems of unqualified and unquestioning dependency on information systems. A more relevant GIGO interpretation arose sometime later, namely “garbage in, gospel out”: with large-scale information systems based around ERP and open datasets as well as “big data” analytics, particularly in a cloud environment, verifying the authenticity and integrity of the data sets used may be almost impossible. In turn, this may easily result in decision making based upon questionable results which are unverifiable. Illicit “impersonation” of, and modifications to, legitimate data sets may become a reality, while at the same time the ability to audit any derived results of analysis may be an important requirement, particularly in the public sector. This paper discusses the pressing need for enhancement of identity, reliability, authenticity and audit services, including naming and addressing services, in this emerging environment. Some appropriate technologies currently on offer are also examined. However, severe limitations in addressing the identified problems are found, and the paper proposes further necessary research work for the area.
(Note: This paper is based on an earlier unpublished paper/presentation “Identity, Addressing, Authenticity and Audit Requirements for Trust in ERP, Analytics and Big/Open Data in a ‘Cloud’ Computing Environment: A Review and Proposal” presented to the Department of Accounting and IT, College of Management, National Chung Chen University, 20 November 2013.)
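One primitive underlying the authenticity and integrity services the paper calls for can be sketched with a keyed hash over a data set; a consumer holding the key can then detect an “impersonated” or modified copy. The key and data below are illustrative only; a real deployment would rest on the managed identity, naming and key infrastructure the paper argues is still missing:

```python
import hashlib
import hmac

def tag_dataset(data: bytes, key: bytes) -> str:
    """Compute an HMAC-SHA256 authentication tag over a data set."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify_dataset(data: bytes, key: bytes, expected: str) -> bool:
    """Constant-time check that the data still matches its tag."""
    return hmac.compare_digest(tag_dataset(data, key), expected)

key = b"shared-secret"                      # illustrative only
original = b"open-data,rows=3\n1,2\n3,4\n5,6\n"
tag = tag_dataset(original, key)

assert verify_dataset(original, key, tag)                   # authentic copy
assert not verify_dataset(original + b"7,8\n", key, tag)    # modified copy detected
```

Note that this only detects tampering after the fact; it does not by itself solve the naming, addressing and audit requirements the paper identifies.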

Relevance: 30.00%

Abstract:

This paper provides a three-layered framework to monitor the positioning performance requirements of Real-time Relative Positioning (RRP) systems of Cooperative Intelligent Transport Systems (C-ITS) that support Cooperative Collision Warning (CCW) applications. These applications exploit state data of surrounding vehicles obtained solely from Global Positioning System (GPS) and Dedicated Short-Range Communications (DSRC) units, without using other sensors. To this end, the paper argues that GPS/DSRC-based RRP systems need an autonomous monitoring mechanism, since the operation of CCW applications is meant to augment safety on roads, and the advantages of autonomous integrity monitoring are essential and integral to any safety-of-life system. The proposed framework requires the RRP systems to detect or predict the unavailability of their sub-systems and of the integrity monitoring module itself, and to account for the effects of data link delays and breakages of DSRC links, as well as of faulty measurement sources of GPS and/or integrated augmentation positioning systems, before the information used for safety warnings and alarms becomes unavailable, unreliable, inaccurate or misleading. Hence, a monitoring framework using a tight integration and correlation approach is proposed for instantaneous reliability assessment of the RRP systems. Ultimately, using the proposed framework, the RRP systems will provide timely alerts to users when the RRP solutions cannot be trusted or used for the intended operation.
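The kind of per-epoch usability check such a monitoring framework implies can be sketched as follows; the field names, thresholds and the protection-level/alert-limit formulation are assumptions borrowed from integrity-monitoring practice, not the paper's actual design:

```python
from dataclasses import dataclass

@dataclass
class RrpEpoch:
    protection_level_m: float   # bound on the relative position error
    alert_limit_m: float        # maximum tolerable error for the CCW application
    dsrc_age_s: float           # age of the newest DSRC message from the peer
    gps_available: bool         # GPS solution available this epoch

def rrp_usable(e: RrpEpoch, max_dsrc_age_s: float = 0.3) -> bool:
    """Flag whether the RRP solution can be trusted for CCW this epoch."""
    if not e.gps_available:
        return False                          # sub-system unavailable
    if e.dsrc_age_s > max_dsrc_age_s:
        return False                          # data link delay or breakage
    # misleading-information risk: error bound must stay inside the alert limit
    return e.protection_level_m <= e.alert_limit_m

assert rrp_usable(RrpEpoch(1.5, 3.0, 0.1, True))
assert not rrp_usable(RrpEpoch(4.0, 3.0, 0.1, True))
```

A real framework would, as the abstract describes, also predict upcoming unavailability rather than only reacting to the current epoch.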

Relevance: 30.00%

Abstract:

The effects of reductions in cell wall lignin content, achieved by RNA interference suppression of coumaroyl 3'-hydroxylase, on plant growth, water transport, gas exchange, and photosynthesis were evaluated in hybrid poplar trees (Populus alba × grandidentata). The growth characteristics of the reduced-lignin trees were significantly impaired, resulting in smaller stems and reduced root biomass when compared to wild-type trees, as well as altered leaf morphology and architecture. The severe inhibition of cell wall lignification produced trees with a collapsed xylem phenotype, resulting in compromised vascular integrity, reduced hydraulic conductivity, and a greater susceptibility to wall failure and cavitation. In the reduced-lignin trees, photosynthetic carbon assimilation and stomatal conductance were also greatly reduced; however, shoot xylem pressure potential and carbon isotope discrimination were higher and water-use efficiency was lower, inconsistent with water stress. Reductions in assimilation rate could not be ascribed to increased stomatal limitation. Analysis of starch and soluble sugars in leaves revealed that photosynthate was accumulating to high levels, suggesting that the trees with substantially reduced cell wall lignin were not carbon limited and that reductions in sink strength were, instead, limiting photosynthesis.

Relevance: 30.00%

Abstract:

The first consideration of any Australian Human Research Ethics Committee should be to satisfy itself that the project before it is worth undertaking. If the project does not add to the body of knowledge, and if it does not improve social welfare or individual wellbeing, then the use of human participants, their tissue or their data must be questioned. Sometimes, however, committees are criticised for appearing to adopt the role of scientific review committees. The intent of this paper is to provide researchers with an understanding of the ethical importance of demonstrating the merit of their research project, and to help them develop protocols that show ethics committees that adequate attention has been paid to this central tenet of dealing ethically with human research participants. Any person proposing human research must be prepared to show that it is worthwhile. This paper will clarify the relationship between research merit and integrity, research ethics, and the responsibilities of human research ethics committees.

Relevance: 30.00%

Abstract:

This paper presents a multi-criteria-based approach for nondestructive diagnostic structural integrity assessment of a decommissioned flatbed rail wagon (FBRW) used for road bridge superstructure rehabilitation and replacement applications. First, full-scale vibration and static test data sets are employed in an FE model of the FBRW to obtain the best ‘initial’ estimate of the model parameters. Second, the ‘final’ model parameters are predicted using sensitivity-based perturbation analysis without significant difficulty. The updated FBRW model is then validated using independent sets of full-scale laboratory static test data. Finally, the updated and validated FE model of the FBRW is used for structural integrity assessment of a single-lane FBRW bridge subjected to the Australian bridge design traffic load.
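Sensitivity-based model updating of the kind described can be illustrated on a toy two-parameter model: adjust model parameters until predicted natural frequencies match measured ones, using finite-difference sensitivities in a Gauss-Newton step. The `predict` function below is a stand-in for the FE solver, not the authors' FBRW model, and all numbers are invented:

```python
import numpy as np

def predict(params):
    """Stand-in for the FE solver: maps (k, m) to two natural frequencies."""
    k, m = params
    return np.array([np.sqrt(k / m), np.sqrt(3.0 * k / m)])

def update(params, measured, steps=30, rel=1e-6):
    """Gauss-Newton updating with finite-difference sensitivities."""
    p = np.array(params, dtype=float)
    for _ in range(steps):
        r = measured - predict(p)
        # finite-difference sensitivity of each frequency to each parameter
        cols = []
        for i in range(len(p)):
            dp = np.zeros_like(p)
            dp[i] = rel * p[i]
            cols.append((predict(p + dp) - predict(p)) / dp[i])
        J = np.column_stack(cols)
        # truncated least squares guards against ill-conditioned sensitivities
        p = p + np.linalg.lstsq(J, r, rcond=1e-4)[0]
    return p

measured = predict(np.array([2.0e6, 1.2e3]))   # synthetic "test data"
updated = update([1.5e6, 1.0e3], measured)     # initial estimate is updated
```

The real procedure additionally weights multiple vibration and static test criteria; this sketch keeps only the core sensitivity-update loop.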

Relevance: 30.00%

Abstract:

Decision-making is such an integral aspect of health care routine that the ability to make the right decisions at crucial moments can lead to improvements in patient health. Evidence-based practice (EBP), the paradigm used to make those informed decisions, relies on the use of the current best evidence from systematic research such as randomized controlled trials (RCTs). Limitations of the outcomes from RCTs, such as the “quantity” and “quality” of the evidence generated, have lowered healthcare professionals’ confidence in using EBP. An alternate paradigm, Practice-Based Evidence, has evolved, the key being evidence drawn from practice settings. Through the use of health information technology, electronic health records (EHRs) capture relevant clinical practice “evidence”. A data-driven approach is proposed to capitalize on the benefits of EHRs. The issues of data privacy, security and integrity are diminished by an information accountability concept. A data warehouse architecture completes the data-driven approach by integrating health data from multi-source systems, unique within the healthcare environment.

Relevance: 30.00%

Abstract:

Purpose: To test an interventional patient skin integrity bundle, the InSPiRE protocol, on the incidence of pressure injuries (PrIs) in critically ill patients in an Australian adult intensive care unit (ICU). Methods: A before-and-after design was used, in which the group of patients receiving the intervention (InSPiRE protocol) was compared with a similar control group who received standard care. Data collected included demographic and clinical variables, skin assessment, PrI presence and stage, and a Sequential Organ Failure Assessment (SOFA) score. Results: Overall, 207 patients were enrolled, 105 in the intervention group and 102 in the control group. Most patients were men, with a mean age of 55. The groups were similar on major demographic variables (age, SOFA scores, ICU length of stay). Pressure injury cumulative incidence was significantly lower in the intervention group (18%) than in the control group (30.4%) for skin injuries (χ2=4.271, df=1, p=0.039) and for mucosal injuries (t=3.27, p<0.001). Significantly fewer PrIs developed over time in the intervention group (log-rank=11.842, df=1, p<0.001), and intervention patients developed fewer skin injuries (>3 PrIs/patient: 1/105) compared with the control group (>3 injuries/patient: 10/102) (p=0.018). Conclusion: The intervention group, receiving the InSPiRE protocol, had lower PrI cumulative incidence and a reduced number and severity of PrIs developing over time. Systematic and ongoing assessment of the patient's skin and PrI risk, as well as implementation of tailored prevention measures, are central to preventing PrIs.
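The reported skin-injury chi-square can be checked from the stated percentages. The counts of 19/105 (intervention) and 31/102 (control) are inferred from 18% and 30.4%, so this is a back-of-envelope sketch rather than the study's analysis:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for a 2x2 table."""
    n = a + b + c + d
    row1, row2, col1, col2 = a + b, c + d, a + c, b + d
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# intervention: 19 of 105 with a PrI; control: 31 of 102 with a PrI
stat = chi_square_2x2(19, 105 - 19, 31, 102 - 31)
print(round(stat, 3))   # 4.271, matching the reported statistic
```

That the uncorrected Pearson statistic reproduces the published value suggests the study did not apply a continuity correction.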

Relevance: 30.00%

Abstract:

Invasive non-native plants have negatively impacted biodiversity and ecosystem functions world-wide. Because of the large number of species, their wide distributions and varying degrees of impact, a more effective method is needed for prioritizing control strategies for cost-effective investment across heterogeneous landscapes. Here, we develop a prioritization framework that synthesizes scientific data, elicits knowledge from experts and stakeholders to identify control strategies, and appraises the cost-effectiveness of strategies. Our objective was to identify the most cost-effective strategies for reducing the total area dominated by high-impact non-native plants in the Lake Eyre Basin (LEB), a case study of ~120 million ha comprising some of the most distinctive Australian landscapes, including Uluru-Kata Tjuta National Park. More than 240 non-native plant species are recorded in the LEB, with many predicted to spread, but there are insufficient resources to control all species. LEB experts identified 12 strategies to control, contain or eradicate non-native species over the next 50 years. The total cost of the proposed strategies was estimated at AU$1.7 billion, an average of AU$34 million annually. Implementation of these strategies is estimated to reduce non-native plant dominance by 17 million ha: a 32% reduction in the likely area dominated by non-native plants within 50 years. The three most cost-effective strategies were controlling Parkinsonia aculeata, Ziziphus mauritiana and Prosopis spp. Combined, these three strategies were estimated to cost only 0.01% of the total cost of all the strategies, but would provide 20% of the total benefits.
Over 50 years, cost-effective spending of AU$2.3 million could eradicate all non-native plant species from the only threatened ecological community within the LEB, the Great Artesian Basin discharge springs. Synthesis and applications: our framework provides a rationale for financially efficient investment in non-native plant management and reveals combinations of strategies that are optimal for different budgets. It also highlights knowledge gaps and incidental findings that could improve effective management of non-native plants, for example addressing the reliability of species distribution data and the prevalence of information sharing across states and regions.
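The benefit-per-cost ranking at the heart of such a prioritization can be sketched as a greedy selection under a budget. The three named strategies come from the study, but every cost and benefit figure below, and the fourth strategy, are invented for illustration:

```python
strategies = [
    # (name, cost in AU$ million, benefit in million ha of dominance avoided)
    ("Control Parkinsonia aculeata", 0.08, 1.4),
    ("Control Ziziphus mauritiana",  0.05, 1.1),
    ("Control Prosopis spp.",        0.04, 0.9),
    ("Basin-wide buffel control",    900.0, 6.0),   # invented comparator
]

# rank by cost-effectiveness: expected benefit per dollar spent
ranked = sorted(strategies, key=lambda s: s[2] / s[1], reverse=True)

def select(budget):
    """Greedily fund the most cost-effective strategies within the budget."""
    chosen, spent = [], 0.0
    for name, cost, benefit in ranked:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

print(select(1.0))   # the three cheap, high-benefit strategies all fit
```

This mirrors the study's finding that a handful of cheap strategies can capture a disproportionate share of the total benefit.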

Relevance: 30.00%

Abstract:

Reductions in DNA integrity, genome stability, and telomere length are strongly associated with the aging process, age-related diseases, and the age-related loss of muscle mass. However, in people reaching an age far beyond their statistical life expectancy, the prevalence of diseases such as cancer, cardiovascular disease, diabetes or dementia is much lower compared to “averagely” aged humans. These inverse observations in nonagenarians (90–99 years), centenarians (100–109 years) and super-centenarians (110 years and older) require a closer look at the dynamics underlying DNA damage within the oldest old of our society. Available data indicate improved DNA repair and antioxidant defense mechanisms in “super old” humans, comparable with those of much younger cohorts. Partly as a result of these enhanced endogenous repair and protective mechanisms, the oldest old appear to cope better with risk factors for DNA damage over their lifetime than subjects whose lifespan coincides with the statistical life expectancy. This model is supported by study results demonstrating superior chromosomal stability, telomere dynamics and DNA integrity in “successful agers”. There is also compelling evidence suggesting that lifestyle-related factors, including regular physical activity, a well-balanced diet and minimized psycho-social stress, can reduce DNA damage and improve chromosomal stability. The most conclusive picture that emerges from the literature is that reaching “super old” age appears to be primarily determined by hereditary/genetic factors, while a healthy lifestyle additionally contributes to achieving the individual maximum lifespan in humans. More research is required in this rapidly growing population of super old people. In particular, there is a need for more comprehensive investigations, including short- and long-term lifestyle interventions, as well as investigations focusing on the mechanisms causing DNA damage, mutations, and telomere shortening.

Relevance: 30.00%

Abstract:

The concept of big data has already outperformed traditional data management efforts in almost all industries. In other instances it has succeeded in obtaining promising results that provide value from large-scale integration and analysis of heterogeneous data sources, for example genomic and proteomic information. Big data analytics has become increasingly important for describing the data sets and analytical techniques in software applications that are so large and complex, owing to its significant advantages including better business decisions, cost reduction and the delivery of new products and services [1]. In a similar context, the health community has experienced not only more complex and larger data content, but also information systems that contain a large number of data sources with interrelated and interconnected data attributes. These have resulted in challenging and highly dynamic environments, leading to the creation of big data with its innumerable complexities, for instance the sharing of information while meeting the expected security requirements of stakeholders. Compared with other sectors, big data analysis in the health sector is still in its early stages. Key challenges include accommodating the volume, velocity and variety of healthcare data in the current deluge of exponential growth. Given the complexity of big data, it is understood that while data storage and accessibility are technically manageable, the implementation of Information Accountability measures for healthcare big data might be a practical solution in support of information security, privacy and traceability measures. Transparency is one important measure that can demonstrate integrity, which is a vital factor in healthcare services. Clarity about performance expectations is another Information Accountability measure, necessary to avoid data ambiguity and controversy about interpretation and, finally, liability [2].
According to current studies [3], Electronic Health Records (EHRs) are key information resources for big data analysis and are also composed of varied co-created values. Common healthcare information originates from, and is used by, different actors and groups, which facilitates understanding of its relationships to other data sources. Consequently, healthcare services often serve as an integrated service bundle. Although it is a critical requirement in healthcare services and analytics, it is difficult to find a comprehensive set of guidelines for adopting EHRs to fulfil big data analysis requirements. Therefore, as a remedy, this research work focuses on a systematic approach containing comprehensive guidelines, together with the accurate data that must be provided, to apply and evaluate big data analysis until the necessary decision-making requirements are fulfilled to improve the quality of healthcare services. Hence, we believe that this approach would subsequently improve quality of life.

Relevance: 30.00%

Abstract:

This poster presents key features of how QUT's integrated research data storage and management services work with researchers through their individual or team research life cycle. By understanding the characteristics of research data, and the long-term need to store this data, QUT has provided resources and tools that support QUT's goal of being a research-intensive institute. Key to successful delivery and operation has been the focus upon researchers' individual needs and the collaboration between providers, in particular Information Technology Services, High Performance Computing and Research Support, and QUT Library. QUT's Research Data Storage service provides all QUT researchers (staff and Higher Degree Research students (HDRs)) with a secure data repository throughout the research data lifecycle. Three distinct storage areas provide for raw research data to be acquired, project data to be worked on, and published data to be archived. Since the service was launched in late 2014, it has provided research project teams from all QUT faculties with acquisition, working or archival data space. Feedback indicates that the storage suits the unique needs of researchers and their data. As part of the workflow to establish storage space for researchers, Research Support Specialists and Research Data Librarians consult with researchers and HDRs to identify data storage requirements for projects and individual researchers, and to select and implement the most suitable data storage services and facilities. While research can be a journey into the unknown [1], a plan can help navigate the uncertainty. Intertwined with the storage provision is QUT's Research Data Management Planning tool. Launched in March 2015, it has already attracted 273 QUT staff and 352 HDR student registrations, and over 620 plans have been created (2/10/2015). Developed in collaboration with the Office of Research Ethics and Integrity (OREI), uptake of the plan has exceeded expectations.

Relevance: 30.00%

Abstract:

Objective: To evaluate the underreporting rate of death-cause data in Shandong province during 2012 to 2013 by the capture-mark-recapture (CMR) method, and to provide a basis for health strategy. Methods: All counties were divided into 5 strata according to the death rates of 2012, and 14 counties were selected; 3 towns or streets were then selected in each county, and 10 villages or neighborhood committees were selected in each town (street). The death data collected from the security bureau and the civil affairs bureau were compared with the reported death data from the National Cause of Death Surveillance, and the underreporting rate was calculated. Results: In the present study, 6 929 death cases were collected, and 1 556 of these were found to be underreported. The death cases estimated by the CMR method were 6 227 (95% CI: 7 593-7 651), and the average underreporting rate was 23.15%. There were significant differences between strata (P<0.01). The underreporting rate in the 0-4 years old group was 56.93%; the male underreporting rate was 22.31% and the female underreporting rate was 24.09%, with no significant difference between the male and female groups (P>0.05). Conclusion: There is obvious underreporting in the cause-of-death surveillance of Shandong province, and the underreporting rates differ among the 5 strata. The underreporting rate is higher in the 0-4 years old group, and the investigation of cause-of-death surveillance for young residents is not thorough in some counties. The investigation quality of cause-of-death surveillance should be improved, increasing the integrity of the reported data and adjusting the mortalities in the different strata, to obtain an accurate mortality rate for Shandong province.
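The two-source capture-mark-recapture estimator such studies apply can be sketched as follows; the counts below are invented for illustration and are not the study's data:

```python
def capture_recapture_estimate(n_surveillance, n_other, n_both):
    """Two-source (Lincoln-Petersen style) estimate of the true death count.

    n_surveillance: deaths reported by the surveillance system
    n_other:        deaths found in the independent source (e.g. civil affairs)
    n_both:         deaths appearing in both sources
    """
    return n_surveillance * n_other / n_both

def underreporting_rate(n_surveillance, estimated_total):
    """Fraction of estimated true deaths missed by the surveillance system."""
    return 1 - n_surveillance / estimated_total

est = capture_recapture_estimate(5373, 6000, 4800)   # invented counts
rate = underreporting_rate(5373, est)                # 0.20 with these counts
```

The estimator assumes the two sources capture deaths independently; violations of that assumption bias the estimated total, which is one reason stratified sampling is used.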