250 results for Acoustic data analysis
Abstract:
Background: The implementation of the Australian Consumer Law in 2011 highlighted the need for better use of injury data to improve the effectiveness and responsiveness of product safety (PS) initiatives. In the PS system, resources are allocated to different priority issues using risk assessment tools. The rapid exchange of information (RAPEX) tool for prioritising hazards, developed by the European Commission, is currently being adopted in Australia. Injury data are required as a basic input to the RAPEX tool in the risk assessment process. One of the challenges in utilising injury data in the PS system is the complexity of translating detailed clinically coded data into broad categories such as those used in the RAPEX tool.
Aims: This study aims to translate hospital burns data into a simplified format by mapping the International Statistical Classification of Diseases and Related Health Problems, Tenth Revision, Australian Modification (ICD-10-AM) burn codes into RAPEX severity rankings, and to use these rankings to identify priority areas in childhood product-related burns data.
Methods: ICD-10-AM burn codes were mapped into four levels of severity using the RAPEX guide table, assigning rankings from 1 to 4 in order of increasing severity. RAPEX rankings were determined by burn thickness and the affected body surface area (BSA), with thickness extracted from the fourth character of T20-T30 codes and BSA from the fourth and fifth characters of T31 codes. Following the mapping process, a secondary analysis of 2008-2010 Queensland Hospital Admitted Patient Data Collection (QHAPDC) paediatric data was conducted to identify priority areas in product-related burns.
Results: Applying the RAPEX rankings to the QHAPDC burn data showed that approximately 70% of paediatric burns in Queensland hospitals fell under RAPEX levels 1 and 2, 25% under RAPEX levels 3 and 4, and the remaining 5% were unclassifiable. In the PS system, priority is given to issues categorised under RAPEX levels 3 and 4. Analysis of external cause codes within these levels showed that flammable materials (for children aged 10-15 years) and hot substances (for children aged under 2 years) were the most frequently identified products.
Discussion and conclusions: The mapping of ICD-10-AM burn codes into RAPEX rankings showed a favourable degree of compatibility between the two classification systems, suggesting that ICD-10-AM coded burn data can be simplified to support PS initiatives more effectively. Additionally, the secondary data analysis showed that only 25% of all admitted burn cases in Queensland were severe enough to trigger a PS response.
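The mapping step described above can be sketched in code. The rank assignments and BSA cut-offs below are illustrative placeholders only, not the published RAPEX guide table; the sketch assumes the ICD-10-AM convention that the fourth character of T20-T30 codes encodes burn degree and the fourth and fifth characters of T31 codes encode the percentage of body surface area burnt.

```python
# Illustrative sketch of the mapping step only: the rank assignments and BSA
# cut-offs below are placeholders, not the published RAPEX guide table.

def thickness_rank(t20_t30_code):
    """Rank burn thickness from the 4th character of a T20-T30 code."""
    degree = {"1": 1, "2": 2, "3": 3}            # 1st/2nd/3rd degree -> placeholder ranks
    fourth = t20_t30_code.replace(".", "")[3:4]  # e.g. "T24.3" -> "3"
    return degree.get(fourth)                    # None when the degree is unspecified

def bsa_rank(t31_code):
    """Rank burn extent from the 4th/5th characters of a T31 code (% body surface)."""
    digits = t31_code.replace(".", "")[3:5]      # e.g. "T31.21" -> "21"
    if not digits.isdigit():
        return None
    total_decile = int(digits[0])                # 0 = <10%, 1 = 10-19%, ... 9 = >=90%
    if total_decile == 0:
        return 1
    if total_decile == 1:
        return 2
    if total_decile <= 3:
        return 3
    return 4

def rapex_rank(thickness_code, area_code):
    """Combine the two rankings, taking the more severe; None marks an unclassifiable record."""
    ranks = [r for r in (thickness_rank(thickness_code), bsa_rank(area_code)) if r is not None]
    return max(ranks) if ranks else None

print(rapex_rank("T24.3", "T31.21"))             # -> 3 under these placeholder rules
```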
Abstract:
A spatial process observed over a lattice or a set of irregular regions is usually modeled using a conditionally autoregressive (CAR) model. The neighborhoods within a CAR model are generally formed deterministically using the inter-distances or boundaries between the regions. An extension of the CAR model is proposed in this article in which the selection of the neighborhood depends on unknown parameter(s). This extension is called a Stochastic Neighborhood CAR (SNCAR) model. The resulting model shows flexibility in accurately estimating covariance structures for data generated from a variety of spatial covariance models. Specific examples are illustrated using data generated from some common spatial covariance functions, as well as real data concerning radioactive contamination of the soil in Switzerland after the Chernobyl accident.
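For context, the standard CAR specification gives each regional effect a normal conditional distribution given its neighbours. The stochastic-neighbourhood weight shown second is only a minimal sketch of the idea, assuming the neighbourhood is defined by an unknown distance cut-off; the abstract does not state the exact parameterisation used.

```latex
% Standard CAR conditionals for regional effects \phi_1, \dots, \phi_n
\phi_i \mid \phi_{-i} \sim \mathcal{N}\!\left(
    \rho \sum_{j \ne i} \frac{w_{ij}}{w_{i+}}\,\phi_j,\;
    \frac{\tau^2}{w_{i+}} \right),
\qquad w_{i+} = \sum_{j \ne i} w_{ij}.

% One possible stochastic-neighbourhood weight (illustrative only): region j is a
% neighbour of i when the inter-distance d_{ij} falls below an unknown cut-off
% \delta, which is estimated together with the other parameters.
w_{ij}(\delta) = \mathbf{1}\{\, d_{ij} \le \delta \,\}.
```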
Abstract:
This project identified the lack of data analysis and travel time prediction on arterials as the main gap in the current literature. To address this gap, it first investigated the reliability of data gathered by Bluetooth technology as a new, cost-effective method for data collection on arterial roads. Then, by considering the similarity among the varieties of daily travel times on different arterial routes, it created a SARIMA model to predict future travel time values. Based on this research outcome, the created model can be applied to online short-term travel time prediction in the future.
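As an illustration of the modelling step, the snippet below fits a seasonal ARIMA model to Bluetooth travel-time data with statsmodels. The file name, the 15-minute aggregation and the (1,0,1)(1,1,1,96) order are assumptions made for this sketch, not the specification used in the project.

```python
# Illustrative SARIMA fit; orders and data layout are assumptions, not the thesis model.
# Assumes travel times aggregated into 15-minute bins -> 96 observations per day.
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

travel_times = pd.read_csv("bluetooth_travel_times.csv",    # hypothetical input file
                           index_col="timestamp", parse_dates=True)["travel_time_s"]

model = SARIMAX(travel_times,
                order=(1, 0, 1),                # non-seasonal ARMA terms
                seasonal_order=(1, 1, 1, 96))   # daily seasonality at 15-min resolution
fitted = model.fit(disp=False)

# Short-term forecast: the next two hours of travel times
print(fitted.forecast(steps=8))
```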
Abstract:
In recent years, increasing focus has been placed on making good business decisions using the products of data analysis. With the advent of the Big Data phenomenon, this is more apparent than ever before. But the question is: how can organizations trust decisions made on the basis of results obtained from the analysis of untrusted data? They need assurance that the data and datasets informing these decisions have not been tainted by an outside agency. This study proposes enabling the authentication of datasets by extending the RESTful architectural scheme to include authentication parameters, while operating within a larger, holistic security framework architecture or model compliant with legislation.
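One minimal way to attach authentication parameters to a RESTful dataset request is an HMAC-signed query, sketched below. The endpoint, parameter names and signing scheme are hypothetical and do not represent the framework proposed in the study.

```python
# Minimal sketch of authentication parameters on a RESTful dataset request
# (HMAC-signed query); the scheme and names are assumptions for illustration.
import hashlib, hmac, time
import requests

API_KEY = "analyst-01"          # hypothetical credentials
SECRET = b"shared-secret"

def signed_params(dataset_id):
    """Build query parameters carrying a timestamped HMAC signature."""
    ts = str(int(time.time()))
    message = f"{dataset_id}:{ts}".encode()
    signature = hmac.new(SECRET, message, hashlib.sha256).hexdigest()
    return {"key": API_KEY, "ts": ts, "sig": signature}

resp = requests.get("https://example.org/api/datasets/injury-2010",  # hypothetical endpoint
                    params=signed_params("injury-2010"))
print(resp.status_code)
```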
Abstract:
This thesis proposes three novel models that extend the statistical methodology for motor unit number estimation, a clinical neurology technique. Motor unit number estimation is important in the treatment of degenerative muscular diseases and, potentially, spinal injury. Additionally, a recent and untested statistic for statistical model choice is found to be a practical alternative for larger datasets. The existing methods for dose finding in dual-agent clinical trials are found to be suitable only for designs of modest dimensions. The model choice case study is the first of its kind and contains interesting results using so-called unit information prior distributions.
Abstract:
This thesis examined the use of acoustic sensors for monitoring avian biodiversity. Acoustic sensors have the potential to significantly increase the spatial and temporal scale of ecological observations; however, acoustic recordings of the environment can be opaque and complex. This thesis developed methods for analysing large volumes of acoustic data to maximise the detection of bird species, and compared the results of acoustic sensor biodiversity surveys with traditional bird survey techniques.
Abstract:
The importance of a thorough and systematic literature review has long been recognised across academic domains as critical to the foundation of new knowledge and theory evolution. Driven by an exponentially growing body of knowledge in the IS discipline, there has been a recent influx of guidance on how to conduct a literature review. As literature reviews emerge as a standalone research method in their own right, these method-focused guidelines are of increasing interest and are gaining acceptance at top-tier IS publication outlets. Nevertheless, the finer details that offer justification for the selected content, and the effective presentation of supporting data, have not been widely discussed in these method papers to date. This paper addresses this gap by exploring the concept of ‘literature profiling’ and arguing that it is a key aspect of a comprehensive literature review. The study establishes the importance of profiling for managing aspects such as quality assurance, transparency and the mitigation of selection bias, and then discusses how profiling can provide a valid basis for data analysis based on the attributes of the selected literature. In essence, this study has conducted an archival analysis of literature (predominantly from the IS domain) to present its main argument: the value of literature profiling, with supporting exemplary illustrations.
Abstract:
Background: The evaluation of hand function is an essential element of clinical practice. The usual assessments focus on the ability to perform activities of daily living. The inclusion of instruments that measure kinematic variables provides a new approach to assessment. Inertial sensors adapted to the hand could be used as a complementary instrument to the traditional assessment.
Material: Clinimetric assessments (Upper Limb Functional Index, Quick Dash), anthropometric variables (height and weight) and dynamometry (palm pressure) were taken. Functional analysis was performed with the Acceleglove system for the right hand and a computer system. The glove has six acceleration sensors, one on each finger and another on the back of the palm.
Method: Analytic, cross-sectional approach. Ten healthy subjects performed six tasks on an evaluation table (tripod pinch, lateral pinch, tip pinch, extension grip, spherical grip and power grip). Each task was performed and measured three times, and the second repetition was analysed for the results section. A Matlab script was created to analyse each movement and detect its phases based on the module (magnitude) of the acceleration vector.
Results: The module of the acceleration vector offers useful information about hand function. Analysis of the data obtained during the performance of functional gestures allows five different phases to be identified within the movement, three static and two dynamic; each module vector was linked to one task.
Conclusion: Module-vector variables could be used to analyse the different tasks performed by the hand. Inertial sensors could be used as a complement to traditional assessment systems.
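A minimal sketch of the module-vector idea is shown below in Python (the study itself used a Matlab script): the magnitude of the three-axis acceleration is computed and its deviation from 1 g is thresholded to separate static from dynamic phases. The sampling rate, window length and threshold are illustrative assumptions, not values from the study.

```python
# Minimal phase-detection sketch from accelerometer magnitude; threshold and
# window length are illustrative assumptions, not the study's Matlab script.
import numpy as np

def segment_phases(acc, fs, thresh=0.15):
    """acc: (N, 3) array of x/y/z acceleration in g; returns per-sample labels."""
    magnitude = np.linalg.norm(acc, axis=1)              # module of the acceleration vector
    win = max(1, int(0.1 * fs))                           # 100 ms moving average
    smooth = np.convolve(np.abs(magnitude - 1.0), np.ones(win) / win, mode="same")
    # movement shows up as deviation of |a| from 1 g -> dynamic phase
    return np.where(smooth > thresh, "dynamic", "static")

fs = 100.0                                                # hypothetical sampling rate (Hz)
acc = np.random.normal([0.0, 0.0, 1.0], 0.02, size=(500, 3))  # stand-in for glove data
print(segment_phases(acc, fs)[:10])
```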
Abstract:
Big data analysis in the healthcare sector is still in its early stages compared with other business sectors, for numerous reasons. Key challenges include accommodating the volume, velocity and variety of healthcare data, and identifying platforms that can examine data from multiple sources, such as clinical records, genomic data, financial systems, and administrative systems. The Electronic Health Record (EHR) is a key information resource for big data analysis and is also composed of varied co-created values. Successful integration and crossing of different subfields of healthcare data, such as biomedical informatics and health informatics, could lead to huge improvements for the end users of the health care system, i.e. the patients.
Abstract:
The concept of big data has already outperformed traditional data management efforts in almost all industries. In other instances it has succeeded in obtaining promising results that provide value from the large-scale integration and analysis of heterogeneous data sources, for example genomic and proteomic information. Big data analytics has become increasingly important for describing the data sets and analytical techniques in software applications that are so large and complex, owing to its significant advantages including better business decisions, cost reduction and the delivery of new products and services [1]. In a similar context, the health community has experienced not only more complex and larger data content, but also information systems that contain a large number of data sources with interrelated and interconnected data attributes. This has resulted in challenging and highly dynamic environments, leading to the creation of big data with its innumerable complexities, for instance the sharing of information with the security requirements expected by stakeholders. When comparing big data analysis with other sectors, the health sector is still in its early stages. Key challenges include accommodating the volume, velocity and variety of healthcare data in the current deluge of exponential growth. Given the complexity of big data, it is understood that while data storage and accessibility are technically manageable, the implementation of Information Accountability measures for healthcare big data might be a practical solution in support of information security, privacy and traceability. Transparency is one important measure that can demonstrate integrity, which is a vital factor in the healthcare service. Clarity about performance expectations is another Information Accountability measure, necessary to avoid data ambiguity, controversy about interpretation and, finally, liability [2]. According to current studies, Electronic Health Records (EHRs) are key information resources for big data analysis and are also composed of varied co-created values [3]. Common healthcare information originates from and is used by different actors and groups, which facilitates understanding of its relationship to other data sources. Consequently, healthcare services often serve as an integrated service bundle. Although this is a critical requirement in healthcare services and analytics, it is difficult to find a comprehensive set of guidelines for adopting EHRs to fulfil big data analysis requirements. Therefore, as a remedy, this research work focuses on a systematic approach containing comprehensive guidelines, with the accurate data that must be provided, to apply and evaluate big data analysis until the necessary decision-making requirements are fulfilled to improve the quality of healthcare services. Hence, we believe that this approach would subsequently improve quality of life.
Abstract:
This review focuses on the impact of chemometrics for resolving data sets collected from investigations of the interactions of small molecules with biopolymers. These samples have been analyzed with various instrumental techniques, such as fluorescence, ultraviolet–visible spectroscopy, and voltammetry. The impact of two powerful and demonstrably useful multivariate methods for the resolution of complex data, multivariate curve resolution–alternating least squares (MCR–ALS) and parallel factor analysis (PARAFAC), is highlighted through analysis of applications involving the interactions of small molecules with the biopolymers serum albumin and deoxyribonucleic acid. The outcomes illustrate that significant information extracted by the chemometric methods was unattainable by simple, univariate data analysis. In addition, although the techniques used to collect data were confined to ultraviolet–visible spectroscopy, fluorescence spectroscopy, circular dichroism, and voltammetry, data profiles produced by other techniques may also be processed. Topics considered include binding sites and modes, cooperative and competitive small-molecule binding, kinetics and thermodynamics of ligand binding, and the folding and unfolding of biopolymers. The applications of the MCR–ALS and PARAFAC methods reviewed were primarily published between 2008 and 2013.
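The core alternating-least-squares step behind MCR-ALS can be sketched in a few lines. This toy version only enforces non-negativity and omits the closure, unimodality and selectivity constraints, and the convergence checks, that a real analysis would use.

```python
# Bare-bones illustration of the alternating least squares core of MCR-ALS
# (D ~ C S^T); constraints beyond non-negativity are omitted here.
import numpy as np

def mcr_als(D, C0, n_iter=50):
    """D: (mixtures x wavelengths) data matrix; C0: initial concentration profiles."""
    C = C0.copy()
    for _ in range(n_iter):
        S = np.linalg.lstsq(C, D, rcond=None)[0].T     # spectra given concentrations
        S = np.clip(S, 0.0, None)                      # non-negativity on spectra
        C = np.linalg.lstsq(S, D.T, rcond=None)[0].T   # concentrations given spectra
        C = np.clip(C, 0.0, None)                      # non-negativity on concentrations
    return C, S            # C: (mixtures x species), S: (wavelengths x species)
```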
Abstract:
Increasingly large-scale applications are generating an unprecedented amount of data. However, the increasing gap between computation and I/O capacity on High End Computing (HEC) machines creates a severe bottleneck for data analysis. Instead of moving data from its source to the output storage, in-situ analytics processes output data while simulations are running. However, in-situ data analysis incurs much greater computing-resource contention with simulations, and such contention severely damages simulation performance on HEC machines. Since different data processing strategies have different impacts on performance and cost, there is a consequent need for flexibility in the location of data analytics. In this paper, we explore and analyze several potential data-analytics placement strategies along the I/O path. To find the best strategy for reducing data movement in a given situation, we propose a flexible data analytics (FlexAnalytics) framework. Based on this framework, a FlexAnalytics prototype system was developed for analytics placement. The FlexAnalytics system enhances the scalability and flexibility of the current I/O stack on HEC platforms and is useful for data pre-processing, runtime data analysis and visualization, as well as for large-scale data transfer. Two use cases, scientific data compression and remote visualization, have been applied in the study to verify the performance of FlexAnalytics. Experimental results demonstrate that the FlexAnalytics framework increases data transition bandwidth and improves the application end-to-end transfer performance.
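To make the placement trade-off concrete, the toy cost model below compares three locations along the I/O path. The placements, cost terms and example numbers are invented for illustration and do not reproduce the FlexAnalytics decision logic.

```python
# Toy cost comparison for placing analytics along the I/O path; the three
# placements, cost terms and example numbers are invented for illustration.
def choose_placement(data_gb, reduction, sim_slowdown_s, net_gbps, disk_gbps):
    seconds = {
        # run analytics on the compute nodes: slows the simulation, ships reduced data
        "in-situ":    sim_slowdown_s + 8.0 * data_gb * reduction / net_gbps,
        # run analytics on staging / I/O nodes: full data over the network, no contention
        "in-transit": 8.0 * data_gb / net_gbps,
        # write everything to storage and analyse offline later
        "offline":    8.0 * data_gb / disk_gbps + 8.0 * data_gb / net_gbps,
    }
    return min(seconds, key=seconds.get)

# 500 GB per output step, analytics keeps 10% of it, 30 s of simulation slowdown
print(choose_placement(data_gb=500, reduction=0.1, sim_slowdown_s=30,
                       net_gbps=10.0, disk_gbps=5.0))
```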
Abstract:
Acoustic recordings play an increasingly important role in monitoring terrestrial environments. However, due to rapid advances in technology, ecologists are accumulating more audio than they can listen to. Our approach to this big-data challenge is to visualize the content of long-duration audio recordings by calculating acoustic indices. These are statistics which describe the temporal-spectral distribution of acoustic energy and reflect content of ecological interest. We combine spectral indices to produce false-color spectrogram images. These not only reveal acoustic content but also facilitate navigation. An additional analytic challenge is to find appropriate descriptors to summarize the content of 24-hour recordings, so that it becomes possible to monitor long-term changes in the acoustic environment at a single location and to compare the acoustic environments of different locations. We describe a 24-hour ‘acoustic-fingerprint’ which shows some preliminary promise.
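A simplified version of the false-colour idea: compute a spectrogram, summarise each time block with three per-frequency statistics, and map them to the red, green and blue channels. The three statistics below (mean energy, temporal entropy, activity) are simplified stand-ins for the published acoustic indices, and the block length and FFT size are assumptions for the sketch.

```python
# Sketch of a false-colour image built from three simplified spectral indices;
# these index definitions are stand-ins for the published acoustic indices.
import numpy as np
from scipy.signal import spectrogram

def false_colour_image(wave, fs, block_s=60.0):
    """Return a (freq, blocks, 3) array in [0, 1] built from three spectral indices."""
    f, t, S = spectrogram(wave, fs=fs, nperseg=512)
    frames_per_block = max(1, int(round(block_s / (t[1] - t[0]))))
    blocks = []
    for start in range(0, S.shape[1] - frames_per_block + 1, frames_per_block):
        seg = S[:, start:start + frames_per_block]
        energy = seg.mean(axis=1)                              # R: mean energy per frequency bin
        p = seg / (seg.sum(axis=1, keepdims=True) + 1e-12)
        entropy = -(p * np.log2(p + 1e-12)).sum(axis=1)        # G: temporal entropy per bin
        activity = (seg > 3.0 * np.median(seg)).mean(axis=1)   # B: fraction of "loud" frames
        blocks.append(np.stack([energy, entropy, activity], axis=-1))
    img = np.stack(blocks, axis=1)
    lo, hi = img.min(axis=(0, 1)), img.max(axis=(0, 1))
    return (img - lo) / (hi - lo + 1e-12)                      # normalise each channel to [0, 1]
```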
Abstract:
Drawing on multimodal texts produced by an Indigenous school community in Australia, I apply critical race theory and multimodal analysis (Jewitt, 2011) to decolonise digital heritage practices for Indigenous students. This study focuses on the particular ways in which students’ counter-narratives about race were embedded in multimodal and digital design in the development of a digital cultural heritage (Giaccardi, 2012). Data analysis involved applying multimodal analysis to the students’ Gamis, following social semiotic categories and principles theorised by Kress and Bezemer (2008), and Jewitt (2006, 2011). This includes attending to the following semiotic elements: visual design, movement and gesture, gaze, and recorded speech, and their interrelationships. The analysis also draws on critical race theory to interpret the students’ representations of race. In particular, the multimodal texts were analysed as a site for students’ views of Indigenous oppression in relation to the colonial powers and ownership of the land in Australian history (Ladson-Billings, 2009). Pedagogies that explore counter-narratives of cultural heritage in the official curriculum can encourage students to reframe their own racial identity, while challenging dominant white, historical narratives of colonial conquest, race, and power (Gutierrez, 2008). The children’s multimodal “Gami” videos, created with the iPad application, Tellagami, enabled the students to imagine hybrid, digital social identities and perspectives of Australian history that were tied to their Indigenous cultural heritage (Kamberelis, 2001).