908 results for Data reliability


Relevance:

20.00%

Publisher:

Abstract:

This research contributes to the field of customer equity by examining how much the strategy drivers of consumption and customer data management contribute to the value of the customer asset. A mixed methods approach focused on one sector: Australian accommodation hotels. The research provides a deeper understanding of how to theorise, conceptualise and practise customer equity management.

Relevance:

20.00%

Publisher:

Abstract:

Road surface skid resistance has been shown to have a strong relationship to road crash risk; however, the current method of using investigatory levels to identify crash-prone roads is problematic, as it may fail to identify risky roads outside the norm. The proposed method analyses a complex and formerly impenetrable volume of road and crash data using data mining. This method rapidly identifies roads with elevated crash rates, potentially due to skid resistance deficit, for investigation. A hypothetical skid resistance/crash risk curve is developed for each road segment, driven by the model deployed in a novel regression tree extrapolation method. The method potentially solves the problem of missing skid resistance values that arises during network-wide crash analysis, and allows risk assessment of the major proportion of roads without skid resistance values.
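
A minimal sketch of the regression-tree extrapolation idea: fit a tree on segments with surveyed skid resistance, then predict values for unsurveyed segments so that network-wide screening can include them. The feature names and values below are illustrative assumptions, not the study's model.

```python
# Sketch of regression-tree extrapolation for missing skid resistance
# values. Columns and figures are illustrative, not the study's data.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

segments = pd.DataFrame({
    "aadt":        [12000, 8500, 23000, 4000, 15500, 9800],
    "curvature":   [0.02, 0.15, 0.01, 0.30, 0.05, 0.11],
    "speed_limit": [100, 80, 110, 60, 100, 80],
    "skid_resistance": [0.45, 0.38, np.nan, 0.52, np.nan, 0.41],
})

known = segments.dropna(subset=["skid_resistance"])
missing = segments[segments["skid_resistance"].isna()]

features = ["aadt", "curvature", "speed_limit"]
tree = DecisionTreeRegressor(max_depth=3, random_state=0)
tree.fit(known[features], known["skid_resistance"])

# Predict skid resistance for segments that were never surveyed,
# so network-wide crash-risk screening can cover them as well.
segments.loc[missing.index, "skid_resistance"] = tree.predict(missing[features])
print(segments)
```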

Relevance:

20.00%

Publisher:

Abstract:

Queensland University of Technology (QUT) Library offers a range of resources and services to researchers as part of its research support portfolio. This poster presents key features of two of the data management services offered by research support staff at QUT Library. The first service is QUT Research Data Finder (RDF), a product of the Australian National Data Service (ANDS) funded Metadata Stores project. RDF is a data registry (metadata repository) that aims to publicise datasets that are research outputs arising from completed QUT research projects. The second is a software and code registry, currently under development with the sole purpose of improving discovery of source code and software as QUT research outputs.

RESEARCH DATA FINDER
As an integrated metadata repository, Research Data Finder aligns with institutional sources of truth, such as QUT's research administration system, ResearchMaster, and QUT's Academic Profiles system, to provide high quality data descriptions that increase awareness of, and access to, shareable research data. The repository and its workflows are designed to foster better data management practices, enhance opportunities for collaboration and research, promote cross-disciplinary research and maximise the impact of existing research datasets.

SOFTWARE AND CODE REGISTRY
The QUT Library software and code registry project stems from researchers' concerns regarding development activities, storage, accessibility, discoverability and impact, sharing, copyright and IP ownership of software and code. As a result, the Library is developing a registry for code and software research outputs, which will use the existing Research Data Finder architecture. The underpinning software for both registries is VIVO, open source software developed by Cornell University. The registry will use the Research Data Finder service instance of VIVO and will include a searchable interface, links to code/software locations and metadata feeds to Research Data Australia. Key benefits of the project include: improving the discoverability and reuse of QUT researchers' code and software within the QUT research community; increasing the profile of QUT research outputs at a national level by providing a metadata feed to Research Data Australia; and improving the metrics for access and reuse of code and software in the repository.

Relevance:

20.00%

Publisher:

Abstract:

Children are encountering more and more graphic representations of data in their learning and everyday lives. Much of this data is quantitative, as different forms of measurement are incorporated into the graphics during their construction. In their formal education, children are required to learn to use a range of these quantitative representations in subjects across the school curriculum. Previous research based on information processing and traditional approaches to cognitive psychology concludes that developing an understanding of such representations of data is a complex process. An alternative approach is to investigate the experiences of children as they interact with graphic representations of quantitative data in their own life-worlds. This paper demonstrates how a phenomenographic approach may be used to reveal the qualitatively different ways in which children in Australian primary and secondary education understand the phenomenon of graphic representations of quantitative data. Seven variations of the children's understanding were revealed. These are described interpretively in the article and confirmed through the words of the children. A detailed outcome space demonstrates how these seven variations are structurally related.

Relevance:

20.00%

Publisher:

Abstract:

Objectives: This study examines the accuracy of Gestational Diabetes Mellitus (GDM) case ascertainment in routinely collected data. Methods: A retrospective cohort study analysed routinely collected data from all births at Cairns Base Hospital, Australia, from 1 January 2004 to 31 December 2010, in the Cairns Base Hospital Clinical Coding system (CBHCC) and the Queensland Perinatal Data Collection (QPDC). GDM case ascertainment in the National Diabetes Services Scheme (NDSS) and Cairns Diabetes Centre (CDC) data was compared. Results: From 2004 to 2010, the specificity of GDM case ascertainment in the QPDC was 99%. In 2010, only 2 of 225 additional cases were identified from the CDC and CBHCC, suggesting QPDC sensitivity is also over 99%. In comparison, the sensitivity of the CBHCC data was 80% during 2004-2010, and the sensitivity of the CDC data was 74% in 2010. During 2010, 223 births were coded as GDM in the QPDC, and the NDSS registered 247 women with GDM from the same postcodes, suggesting reasonable uptake of the NDSS register. However, the proportion of Aboriginal and Torres Strait Islander women was lower than expected. Conclusion: The accuracy of GDM case ascertainment in the QPDC appears high, with lower accuracy in routinely collected hospital and local health service data. This limits the capacity of local data for planning, evaluation and the development of structured systems to improve post-pregnancy care, and may lead to underestimation of the resources required. Implications: Data linkage should be considered to improve the accuracy of routinely collected local health service data. The accuracy of the NDSS for Aboriginal and Torres Strait Islander women requires further evaluation.
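
For readers unfamiliar with the accuracy terms used above, a toy calculation of sensitivity and specificity from a 2x2 agreement table; the counts are invented for illustration, not the study's data.

```python
# Toy illustration of case-ascertainment accuracy against a reference
# standard; the counts below are made up, not the study's data.
def sensitivity_specificity(tp, fp, fn, tn):
    """Return (sensitivity, specificity) from 2x2 agreement counts."""
    sensitivity = tp / (tp + fn)   # true cases the data source captured
    specificity = tn / (tn + fp)   # non-cases correctly left uncoded
    return sensitivity, specificity

# Hypothetical hospital coding (CBHCC) vs. the reference ascertainment:
sens, spec = sensitivity_specificity(tp=178, fp=4, fn=45, tn=3500)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")
```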

Relevance:

20.00%

Publisher:

Abstract:

Operational modal analysis (OMA) is prevalent in the modal identification of civil structures. It requires response measurements of the underlying structure under ambient loads, and a valid OMA method requires the excitation to be white noise in time and space. Although there are numerous applications of OMA in the literature, few have investigated the statistical distribution of a measurement and the influence of such randomness on modal identification. This research uses modified kurtosis to evaluate the statistical distribution of raw measurement data, and proposes a windowing strategy employing this index to select quality datasets. To demonstrate how the data selection strategy works, ambient vibration measurements of a laboratory bridge model and a real cable-stayed bridge were considered. The analysis incorporated frequency domain decomposition (FDD) as the target OMA approach for modal identification, and the modal identification results using data segments with different randomness were compared. The discrepancy in the FDD spectra of the results indicates that, in order to fulfil the assumptions of an OMA method, special care must be taken in processing a long vibration measurement record. The proposed data selection strategy is easy to apply and verified as effective in modal analysis.
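
A minimal sketch of the windowing idea, with plain sample excess kurtosis standing in for the paper's modified kurtosis index: windows whose distribution is closest to Gaussian (excess kurtosis near zero) are retained as quality datasets for FDD.

```python
# Sketch of kurtosis-based window selection for ambient vibration data.
# Plain sample excess kurtosis stands in for the paper's modified index.
import numpy as np
from scipy.stats import kurtosis

def select_windows(signal, win_len, n_keep):
    """Split the record into windows and keep the n_keep windows whose
    excess kurtosis is closest to 0, i.e. closest to the Gaussian white
    noise assumption underpinning OMA."""
    n_win = len(signal) // win_len
    windows = signal[:n_win * win_len].reshape(n_win, win_len)
    k = np.abs(kurtosis(windows, axis=1, fisher=True))
    best = np.argsort(k)[:n_keep]
    return windows[np.sort(best)]   # keep chronological order

rng = np.random.default_rng(0)
record = rng.standard_normal(60_000)                      # stand-in record
record[20_000:21_000] += 8 * rng.standard_normal(1_000)   # non-Gaussian burst
clean = select_windows(record, win_len=4_096, n_keep=10)
print(clean.shape)  # 10 near-Gaussian windows passed on to FDD
```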

Relevance:

20.00%

Publisher:

Abstract:

Currently there are ~3000 known species of Sarcophagidae (Diptera), classified into 173 genera in three subfamilies. Almost 25% of sarcophagids belong to the genus Sarcophaga (sensu lato); however, little is known about the validity of, and relationships between, the ~150 (or more) subgenera of Sarcophaga s.l. In this preliminary study, we evaluated the usefulness of three sources of data for resolving relationships between 35 species from 14 Sarcophaga s.l. subgenera: the mitochondrial COI barcode region, ~800 bp of the nuclear gene CAD, and 110 morphological characters. Bayesian, maximum likelihood (ML) and maximum parsimony (MP) analyses were performed on the combined dataset. Much of the tree was supported only by the Bayesian and ML analyses, with the MP tree poorly resolved. The genus Sarcophaga s.l. was resolved as monophyletic in both the Bayesian and ML analyses, and strong support was obtained at the species level. Notably, the only subgenus consistently resolved as monophyletic was Liopygia. The monophyly of, and relationships between, the remaining Sarcophaga s.l. subgenera sampled remain questionable. We suggest that future phylogenetic studies on the genus Sarcophaga s.l. use combined datasets for analyses. We also advocate the use of additional data and a range of inference strategies to assist with resolving relationships within Sarcophaga s.l.
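
As a hedged illustration of one of the inference strategies compared (maximum parsimony), here is a minimal Fitch small-parsimony scorer for a single character on an invented four-taxon topology; it is not the study's analysis, and the tree and states are made up.

```python
# Minimal Fitch small-parsimony scorer for one character on a fixed
# binary tree -- illustrating the MP criterion, not the study itself.
def fitch(tree, states):
    """tree: nested 2-tuples of taxon names; states: taxon -> state.
    Returns (state set at this node, parsimony score so far)."""
    if isinstance(tree, str):                  # leaf node
        return {states[tree]}, 0
    l_set, l_cost = fitch(tree[0], states)
    r_set, r_cost = fitch(tree[1], states)
    common = l_set & r_set
    if common:                                 # intersection: no extra change
        return common, l_cost + r_cost
    return l_set | r_set, l_cost + r_cost + 1  # union: one state change

# Invented topology and character states for four Sarcophaga species.
tree = ((("crassipalpis", "bullata"), "peregrina"), "dux")
char = {"crassipalpis": "A", "bullata": "A", "peregrina": "G", "dux": "G"}
print(fitch(tree, char))  # ({'G'}, 1): one change explains this character
```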

Relevance:

20.00%

Publisher:

Abstract:

Big Data is a rising IT trend, similar to cloud computing, social networking and ubiquitous computing, and it can offer beneficial scenarios in the e-health arena. In one such scenario, Big Data needs to be kept secure for a long period of time in order to realise benefits such as finding cures for infectious diseases while protecting patient privacy. It is therefore desirable to analyse Big Data to derive meaningful information while the data is stored securely, which makes the analysis of database encryption techniques essential. In this study, we simulated three technical environments, namely plain text, Microsoft built-in encryption, and a custom Advanced Encryption Standard (AES) implementation, using a bucket index in a Data-as-a-Service (DaaS) setting. The results showed that the custom AES-DaaS approach has a faster range-query response time than Microsoft built-in encryption. Furthermore, the scalability test showed that there are performance thresholds that depend on the physical IT resources. For efficient Big Data management in e-health, it is therefore important to examine scalability limits, even in a cloud computing environment. In addition, when designing an e-health database, both patient privacy and system performance need to be treated as top priorities.
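
A minimal sketch of the bucket-index idea over encrypted records, with Fernet (an AES-based recipe from the Python cryptography library) standing in for the study's custom AES setup: bucket ids stay in plaintext so the server can coarsely answer range queries, and only the candidate buckets are decrypted and filtered exactly. The schema and bucket width are assumptions for illustration.

```python
# Sketch of a bucket index over encrypted records. Fernet (AES-128-CBC
# plus HMAC) stands in for the study's custom AES implementation.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()
f = Fernet(key)
BUCKET_WIDTH = 10  # e.g. patient age grouped into 10-year buckets

def store(records):
    table = []
    for rec in records:
        bucket = rec["age"] // BUCKET_WIDTH     # plaintext bucket id
        table.append((bucket, f.encrypt(json.dumps(rec).encode())))
    return table

def range_query(table, lo, hi):
    candidate_buckets = range(lo // BUCKET_WIDTH, hi // BUCKET_WIDTH + 1)
    hits = []
    for bucket, blob in table:
        if bucket in candidate_buckets:         # server-side coarse filter
            rec = json.loads(f.decrypt(blob))   # client-side decryption
            if lo <= rec["age"] <= hi:          # exact filter on plaintext
                hits.append(rec)
    return hits

table = store([{"id": 1, "age": 34}, {"id": 2, "age": 47}, {"id": 3, "age": 62}])
print(range_query(table, 40, 65))  # -> records 2 and 3
```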

Relevance:

20.00%

Publisher:

Abstract:

This paper describes the work being conducted in the baseline rail level crossing project, supported by the Australian rail industry and the Cooperative Research Centre for Rail Innovation. The paper discusses the limitations, for analysis, of near-miss data obtained using current level crossing occurrence reporting practices. The project is addressing these limitations through the development of a data collection and analysis system with an underlying level crossing accident causation model. An overview of the methodology and the improved data recording process is given. The paper concludes with a brief discussion of the benefits this project is expected to provide to the Australian rail industry.

Relevance:

20.00%

Publisher:

Abstract:

This research aims to develop a reliable density estimation method for signalised arterials based on cumulative counts from upstream and downstream detectors. In order to overcome the counting errors associated with urban arterials that have mid-link sinks and sources, CUmulative plots and Probe Integration for Travel timE estimation (CUPRITE) is employed for density estimation. By utilising samples from probe vehicles, the method reduces or cancels the counting inconsistencies that arise when vehicle conservation is not satisfied within a section. The method is tested in a controlled environment; the authors demonstrate the effectiveness of CUPRITE for density estimation in a signalised section and discuss issues associated with the method.
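
A minimal sketch of the conservation principle behind cumulative-count density estimation, with CUPRITE's probe integration simplified to a single re-anchoring step; all numbers are illustrative assumptions, not the study's data.

```python
# Sketch of density estimation from cumulative detector counts:
# vehicles inside the section = upstream count - downstream count
# (+ initial occupancy). Mid-link sinks/sources make this drift, so a
# probe observation is used to re-anchor the offset -- a simplified
# stand-in for CUPRITE's probe integration.
import numpy as np

def density(up_counts, down_counts, length_km, offset=0.0):
    """Per-interval density (veh/km) from cumulative counts."""
    return (np.asarray(up_counts) - np.asarray(down_counts) + offset) / length_km

up   = np.array([0, 12, 25, 40, 52])   # cumulative upstream counts
down = np.array([0,  8, 20, 33, 47])   # cumulative downstream counts

k = density(up, down, length_km=0.5)
print(k)  # veh/km at each count interval

# Probe re-anchoring: if a probe reveals the true in-section count at
# some instant, reset the offset so counting errors stop accumulating.
true_in_section_at_t3 = 9              # hypothetical probe-derived value
offset = true_in_section_at_t3 - (up[3] - down[3])
print(density(up, down, 0.5, offset))  # corrected density estimates
```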

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: Conjunctival ultraviolet autofluorescence (UVAF) photography was developed to detect and characterise pre-clinical sunlight-induced UV damage. The reliability of this measurement and its relationship to outdoor activity are currently unknown. METHODS: 599 people aged 16-85 years in the cross-sectional Norfolk Island Eye Study were included in the validation study. 196 individual UVAF photographs (49 people) and 60 UVAF photographs (15 people) of Norfolk Island Eye Study participants were used for intra- and inter-observer reliability assessment, respectively. Conjunctival UVAF was measured using UV photography. UVAF area was calculated using computerised methods by one grader on two occasions (intra-observer analysis) or by two graders (inter-observer analysis). Outdoor activity category, during summer and winter separately, was determined with a UV questionnaire. Total UVAF equalled the area measured in four conjunctival regions (nasal/temporal conjunctiva of the right and left eyes). RESULTS: Intra-observer (ρ_c = 0.988, 95% CI 0.967 to 0.996, p < 0.001) and inter-observer concordance correlation coefficients (ρ_c = 0.924, 95% CI 0.870 to 0.956, p < 0.001) for total UVAF exceeded 0.900. When grouped according to 10 mm² total UVAF increments, intra- and inter-observer reliability was very good (κ = 0.81) and good (κ = 0.71), respectively. Increasing time outdoors was strongly associated with increasing total UVAF in both summer and winter (p_trend < 0.001). CONCLUSION: Intra- and inter-observer reliability of conjunctival UVAF is high. In this population, UVAF correlates strongly with the authors' survey-based assessment of time spent outdoors.
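
The agreement statistic reported above, the concordance correlation coefficient (ρ_c), can be computed as follows; the grader values are invented for illustration, not study data.

```python
# Lin's concordance correlation coefficient, the agreement statistic
# (rho_c) reported for intra- and inter-observer UVAF area grading.
import numpy as np

def concordance_ccc(x, y):
    """rho_c = 2*cov(x,y) / (var(x) + var(y) + (mean(x)-mean(y))^2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Illustrative grader-vs-grader UVAF areas (mm^2); not study data.
grader1 = [3.2, 10.5, 0.0, 22.1, 7.8, 15.3]
grader2 = [3.0, 11.0, 0.4, 21.5, 8.1, 14.9]
print(f"rho_c = {concordance_ccc(grader1, grader2):.3f}")
```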

Relevance:

20.00%

Publisher:

Abstract:

How do agents with limited cognitive capacities flourish in informationally impoverished or unexpected circumstances? Aristotle argued that human flourishing emerges from knowing about the world and our place within it. If he is right, then the virtuous processes that produce knowledge best explain flourishing. Influenced by Aristotle, virtue epistemology defends an analysis of knowledge in which beliefs are evaluated for their truth and for the intellectual virtues or competences relied on in their creation. However, human flourishing may instead emerge from how degrees of ignorance are managed in an uncertain world; perhaps decision-making in the shadow of knowledge, a Bayesian approach, best explains human wellbeing. In this dissertation I argue that a hybrid of virtue and Bayesian epistemologies explains human flourishing, a position I term homeostatic epistemology. Homeostatic epistemology supposes that an agent has a rational credence p when p is the product of reliable processes aligned with the norms of probability theory, whereas an agent knows that p when a rational credence p is the product of reliable processes such that: 1) p meets some relevant threshold for belief (such that the agent acts as though p were true, and indeed p is true); 2) p coheres with a satisficing set of relevant beliefs; and 3) the relevant set of beliefs is coordinated appropriately to meet the integrated aims of the agent. Homeostatic epistemology recognizes that justificatory relationships between beliefs are constantly changing to combat uncertainties and to take advantage of predictable circumstances. Contrary to holism, justification is built up and broken down across limited sets, like the anabolic and catabolic processes that maintain homeostasis in the cells, organs and systems of the body. It is the coordination of choristic sets of reliably produced beliefs that creates the greatest flourishing, given the limitations inherent in the situated agent.
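
A toy illustration of the two formal ingredients the definition combines: a credence kept in line with the norms of probability theory (here, Bayes' rule) and a threshold above which the agent acts as though p were true. The numbers and threshold are invented for illustration.

```python
# Toy illustration of homeostatic epistemology's two ingredients:
# a credence updated by Bayes' rule and a threshold for belief.
def bayes_update(prior, p_e_given_p, p_e_given_not_p):
    """P(p | evidence) from P(p), P(e | p) and P(e | not p)."""
    joint = prior * p_e_given_p
    return joint / (joint + (1 - prior) * p_e_given_not_p)

BELIEF_THRESHOLD = 0.9     # invented threshold for acting on p

credence = 0.5             # initial rational credence in p
for _ in range(3):         # three independent confirming observations
    credence = bayes_update(credence, 0.8, 0.3)

print(credence, credence >= BELIEF_THRESHOLD)  # ~0.95 -> act on p
```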

Relevance:

20.00%

Publisher:

Abstract:

Background: Multiple sclerosis (MS) is the most common cause of chronic neurologic disability beginning in early to middle adult life. Results from recent genome-wide association studies (GWAS) have substantially lengthened the list of disease loci and provide convincing evidence supporting a multifactorial and polygenic model of inheritance. Nevertheless, the knowledge of MS genetics remains incomplete, with many risk alleles still to be revealed. Methods: We used a discovery GWAS dataset (8,844 samples: 2,124 cases and 6,720 controls) and a multi-step logistic regression protocol to identify novel genetic associations. The emerging genetic profile included 350 independent markers and was used to calculate and estimate the cumulative genetic risk in an independent validation dataset (3,606 samples). Analysis of covariance (ANCOVA) was implemented to compare clinical characteristics of individuals with various degrees of genetic risk. Gene ontology and pathway enrichment analyses were done using the DAVID functional annotation tool, the GO Tree Machine, and the Pathway-Express profiling tool. Results: In the discovery dataset, the median cumulative genetic risk (P-Hat) was 0.903 and 0.007 in the case and control groups, respectively, with 79.9% classification sensitivity and 95.8% specificity. The identified profile shows a significant enrichment of genes involved in the immune response, cell adhesion, cell communication/signaling, nervous system development, and neuronal signaling, including ionotropic glutamate receptors, which have been implicated in the pathological mechanism driving neurodegeneration. In the validation dataset, the median cumulative genetic risk was 0.59 and 0.32 in the case and control groups, respectively, with 62.3% classification sensitivity and 75.9% specificity. No differences in disease progression or T2-lesion volumes were observed among the four predicted genetic risk groups (high, medium, low, misclassified). On the other hand, a significant difference (F = 2.75, P = 0.04) was detected in age of disease onset between the affected individuals misclassified as controls (mean = 36 years) and the other three groups (high, 33.5 years; medium, 33.4 years; low, 33.1 years). Conclusions: The results are consistent with the polygenic model of inheritance. The cumulative genetic risk established using currently available genome-wide association data provides important insights into disease heterogeneity and the completeness of current knowledge in MS genetics.
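
A minimal sketch of the cumulative-genetic-risk approach: fit a logistic regression on risk-allele counts and score an independent set with predicted case probabilities (the "P-Hat"). Genotypes and effects are simulated, and the study's multi-step marker-selection protocol is omitted.

```python
# Sketch of a cumulative-genetic-risk classifier on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, m = 2000, 350                      # samples, independent markers
G = rng.integers(0, 3, size=(n, m))   # risk-allele dosage per marker (0/1/2)
w = rng.normal(0, 0.08, size=m)       # latent per-marker effect sizes
y = (rng.random(n) < 1 / (1 + np.exp(-(G @ w)))).astype(int)

# Fit on a "discovery" split, score a held-out "validation" split.
model = LogisticRegression(max_iter=1000).fit(G[:1500], y[:1500])
p_hat = model.predict_proba(G[1500:])[:, 1]   # cumulative genetic risk

pred = p_hat >= 0.5
truth = y[1500:].astype(bool)
sens = (pred & truth).sum() / truth.sum()
spec = (~pred & ~truth).sum() / (~truth).sum()
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
```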

Relevance:

20.00%

Publisher:

Abstract:

We present a method for optical encryption of information based on the time-dependent dynamics of writing and erasing refractive index changes in a bulk lithium niobate medium. Information is written into the photorefractive crystal with a spatially amplitude-modulated laser beam; overexposure significantly degrades the stored data, making it unrecognizable. We show that the degradation can be reversed and that a one-to-one relationship exists between the degradation and recovery rates. This simple relationship can be used to determine the erasure time required for decrypting the scrambled index patterns. In addition, the method could be used as a straightforward general technique for determining characteristic writing and erasure rates in photorefractive media.
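
A hedged sketch of how a known rate relationship yields a decryption time, assuming first-order exponential erasure kinetics; this is an assumption for illustration, not the paper's fitted model, and all rates are invented.

```python
# Sketch under an assumed first-order exponential model: overexposure
# writes an unwanted excess index change that erasure decays as
# exp(-t/tau_e). Solving for when the excess falls back to the level
# of the intended pattern gives the required erasure (decryption) time.
import math

tau_e = 120.0    # assumed erasure time constant (s)
excess = 5.0     # overexposure index change, relative units (invented)
target = 1.0     # intended stored-pattern index change (invented)

# excess * exp(-t / tau_e) = target  =>  t = tau_e * ln(excess / target)
t_erase = tau_e * math.log(excess / target)
print(f"erase for ~{t_erase:.0f} s to recover the stored pattern")
```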