435 results for analytics
Abstract:
We present a Connected Learning Analytics (CLA) toolkit, which enables data to be extracted from social media and imported into a Learning Record Store (LRS), as defined by the new xAPI standard. Core to the toolkit is the notion of learner access to their own data. A number of implementation issues are discussed, and an ontology of xAPI verb/object/activity statements, unified across seven different social media and online environments, is introduced. After considering some of the analytics that learners might be interested in discovering about their own processes (the delivery of which is prioritised for the toolkit), we propose a set of learning activities that could be easily implemented, with their data tracked by anyone using the toolkit and an LRS.
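The abstract does not reproduce the toolkit's actual vocabulary, but the xAPI statements it refers to share a common actor/verb/object shape. A minimal sketch in Python, assuming an illustrative activity IRI (the verb IRI is a standard ADL one):

```python
# Minimal sketch of an xAPI-style statement. The activity IRI and actor are
# illustrative placeholders, not the CLA toolkit's published vocabulary.
import json
import uuid
from datetime import datetime, timezone

def make_statement(actor_email: str, verb_iri: str, verb_name: str,
                   object_iri: str, object_name: str) -> dict:
    """Build a bare-bones xAPI statement (actor / verb / object)."""
    return {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": {"mbox": f"mailto:{actor_email}", "objectType": "Agent"},
        "verb": {"id": verb_iri, "display": {"en-US": verb_name}},
        "object": {
            "id": object_iri,
            "objectType": "Activity",
            "definition": {"name": {"en-US": object_name}},
        },
    }

# A learner commenting on a blog post, expressed uniformly regardless of
# which social media platform the event originally came from.
stmt = make_statement(
    "learner@example.org",
    "http://adlnet.gov/expapi/verbs/commented",     # standard ADL verb IRI
    "commented",
    "http://example.org/activities/blog-post-42",   # hypothetical activity
    "Blog post 42",
)
print(json.dumps(stmt, indent=2))
```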
Abstract:
Reflective writing is an important learning task for fostering reflective practice, but even when assessed it is rarely analysed or critically reviewed, owing to its subjective and affective nature. We propose a process for capturing subjective and affective analytics based on the identification and recontextualisation of anomalous features within reflective text. We evaluate two human-supervised trials of the process, and thereby demonstrate the potential for an automated Anomaly Recontextualisation process for Learning Analytics.
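The paper's actual anomaly features are not specified in the abstract. As a rough, hypothetical illustration of the underlying idea, one could flag terms that are rare across a corpus of reflective texts as candidate anomalous features to recontextualise:

```python
# Rough illustration only: flags terms that are statistically rare across a
# corpus of reflections as candidate "anomalous features".
from collections import Counter
import re

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())

def anomalous_terms(corpus: list[str], target: str, max_df: int = 1) -> set[str]:
    """Return terms in `target` appearing in at most `max_df` corpus texts."""
    doc_freq = Counter()
    for doc in corpus:
        doc_freq.update(set(tokenize(doc)))   # count each term once per doc
    return {t for t in set(tokenize(target)) if doc_freq[t] <= max_df}

reflections = [
    "I felt confident about the group task this week.",
    "The tutorial helped me understand the assessment criteria.",
]
target = "I felt overwhelmed and anxious about the looming deadline."
print(anomalous_terms(reflections, target))
# Flagged terms would then be shown back to the writer in context.
```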
Abstract:
One of the main challenges in data analytics is that discovering structures and patterns in complex datasets is a compute-intensive task. Recent advances in high-performance computing provide part of the solution: multicore systems are now more affordable and more accessible. In this paper, we investigate how such systems can be used to develop more advanced methods for data analytics. We focus on two specific areas: model-driven analysis and data mining using optimisation techniques.
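The abstract does not detail the paper's own methods; as a generic sketch of the multicore idea it builds on, a grid of candidate models can be scored in parallel across cores (the objective function here is a stand-in):

```python
# Generic illustration of exploiting multicore parallelism for analytics:
# score many candidate models in parallel, then keep the best.
from concurrent.futures import ProcessPoolExecutor

def score_model(params: tuple[float, float]) -> tuple[float, tuple]:
    """Stand-in objective: lower (a-3)^2 + (b+1)^2 is better."""
    a, b = params
    return ((a - 3.0) ** 2 + (b + 1.0) ** 2, params)

if __name__ == "__main__":
    grid = [(a / 2, b / 2) for a in range(-10, 11) for b in range(-10, 11)]
    with ProcessPoolExecutor() as pool:       # one worker per core by default
        best_score, best_params = min(pool.map(score_model, grid))
    print(f"best params {best_params} with score {best_score:.3f}")
```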
Abstract:
Experience has shown that developing business applications based on text analysis normally requires considerable time and expertise in the field of computational linguistics. Several approaches to integrating text analysis systems with business applications have been proposed, but so far there has been no coordinated approach that would enable building scalable and flexible text analysis applications in enterprise scenarios. In this paper, a service-oriented architecture for text processing applications in the business domain is introduced. It comprises various groups of processing components and knowledge resources. The architecture, created as a result of our experience with building natural language processing applications in business scenarios, allows for the reuse of text analysis and other components, and facilitates the development of business applications. We verify our approach by showing how the proposed architecture can be applied to create a text-analytics-enabled business application that addresses a concrete business scenario.
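The abstract names groups of processing components but not their concrete interfaces. A minimal sketch of the compositional idea, with hypothetical component names and a uniform component signature:

```python
# Minimal sketch of a service-oriented text pipeline: every component exposes
# the same interface, so business applications can reuse and recombine them.
# Component names here are hypothetical, not the paper's architecture.
from typing import Callable

Annotations = dict[str, object]
Component = Callable[[str, Annotations], Annotations]

def tokenizer(text: str, ann: Annotations) -> Annotations:
    ann["tokens"] = text.split()
    return ann

def sentiment(text: str, ann: Annotations) -> Annotations:
    negative = {"delay", "complaint", "refund"}   # toy lexicon
    hits = sum(1 for t in ann.get("tokens", []) if t.lower() in negative)
    ann["sentiment"] = "negative" if hits else "neutral"
    return ann

def run_pipeline(text: str, components: list[Component]) -> Annotations:
    ann: Annotations = {}
    for component in components:
        ann = component(text, ann)
    return ann

print(run_pipeline("Customer filed a complaint about the refund delay",
                   [tokenizer, sentiment]))
```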
Abstract:
How should marketing educators teach today's technologically savvy college students the latest knowledge, as well as the relevant soft and hard skills, for employment in a world of Web 2.0? The changing environment requires the development of innovative pedagogical approaches to enhance students' experiential learning. Recent research has focused on implementing technology and adopting educational blogging in the marketing curriculum. This paper outlines a semester-long marketing blog competition in which students had to (1) create and maintain a marketing blog and (2) apply web analytics to analyse, manage and improve their blog's performance based on key performance indicators. The article offers a detailed discussion of the design and implementation, as well as the outcomes, based on quantitative and qualitative student feedback.
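The abstract does not list the specific KPIs the students used; as a hypothetical illustration, two common web-analytics indicators can be computed from raw session records:

```python
# Hypothetical illustration: computes two common web-analytics KPIs,
# bounce rate and pages per session, from raw session records.
sessions = [
    {"visitor": "a", "pages_viewed": 1},
    {"visitor": "b", "pages_viewed": 4},
    {"visitor": "c", "pages_viewed": 2},
    {"visitor": "d", "pages_viewed": 1},
]

total = len(sessions)
bounces = sum(1 for s in sessions if s["pages_viewed"] == 1)  # single-page visits
pages_per_session = sum(s["pages_viewed"] for s in sessions) / total

print(f"bounce rate: {bounces / total:.0%}")          # 50%
print(f"pages per session: {pages_per_session:.2f}")  # 2.00
```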
Abstract:
Big data analysis in the healthcare sector is still in its early stages compared with other business sectors, for numerous reasons, among them:
• Accommodating the volume, velocity and variety of healthcare data
• Identifying platforms that can examine data from multiple sources, such as clinical records, genomic data, financial systems, and administrative systems
The Electronic Health Record (EHR) is a key information resource for big data analysis and is also composed of varied co-created values. Successful integration and crossing of different subfields of healthcare data, such as biomedical informatics and health informatics, could lead to huge improvements for the end users of the healthcare system, i.e. the patients.
Abstract:
With the ever-increasing amount of eHealth data available from various eHealth systems and sources, Health Big Data Analytics promises enticing benefits, such as enabling the discovery of new treatment options and improved decision making. However, concerns over the privacy of information have hindered the aggregation of this information. To address these concerns, we propose the use of Information Accountability protocols to provide patients with the ability to decide how and when their data can be shared and aggregated for use in big data research. In this paper, we discuss the issues surrounding Health Big Data Analytics and propose a consent-based model to address privacy concerns and help achieve the promised benefits of Big Data in eHealth.
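The protocol itself is not specified in the abstract. As a sketch of the consent-based idea under stated assumptions (hypothetical policy fields and record layout), aggregation could filter records against patient-declared sharing preferences:

```python
# Sketch only: illustrates the consent-based idea of filtering records
# against patient-declared preferences before aggregation for research.
# Policy fields and record layout are hypothetical.
from dataclasses import dataclass

@dataclass
class ConsentPolicy:
    patient_id: str
    allow_research: bool            # may records enter research datasets?
    allowed_fields: frozenset[str]  # which fields may be shared

def aggregate_for_research(records: list[dict],
                           policies: dict[str, ConsentPolicy]) -> list[dict]:
    """Keep only consented records, projected onto consented fields."""
    out = []
    for rec in records:
        policy = policies.get(rec["patient_id"])
        if policy and policy.allow_research:
            out.append({k: v for k, v in rec.items() if k in policy.allowed_fields})
    return out

policies = {
    "p1": ConsentPolicy("p1", True, frozenset({"age", "diagnosis"})),
    "p2": ConsentPolicy("p2", False, frozenset()),
}
records = [
    {"patient_id": "p1", "age": 54, "diagnosis": "T2D", "name": "..."},
    {"patient_id": "p2", "age": 31, "diagnosis": "asthma", "name": "..."},
]
print(aggregate_for_research(records, policies))  # only p1's age + diagnosis
```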
Abstract:
Increasingly large-scale applications are generating an unprecedented amount of data. However, the growing gap between computation and I/O capacity on High End Computing (HEC) machines creates a severe bottleneck for data analysis. Instead of moving data from its source to the output storage, in-situ analytics processes output data while simulations are running. However, in-situ data analysis incurs far more computing-resource contention with simulations, and such contention severely damages simulation performance on HEC machines. Since different data processing strategies have different impacts on performance and cost, there is a consequent need for flexibility in the location of data analytics. In this paper, we explore and analyze several potential data-analytics placement strategies along the I/O path and, to find the best strategy for reducing data movement in a given situation, we propose a flexible data analytics (FlexAnalytics) framework. Based on this framework, a FlexAnalytics prototype system was developed for analytics placement. The FlexAnalytics system enhances the scalability and flexibility of the current I/O stack on HEC platforms and is useful for data pre-processing, runtime data analysis and visualization, as well as large-scale data transfer. Two use cases, scientific data compression and remote visualization, were applied in the study to verify the performance of FlexAnalytics. Experimental results demonstrate that the FlexAnalytics framework increases data transfer bandwidth and improves application end-to-end transfer performance.
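FlexAnalytics' actual placement logic is not given in the abstract; the following toy sketch only illustrates the underlying trade-off it navigates, with made-up cost parameters:

```python
# Toy sketch of the placement trade-off (not FlexAnalytics' actual logic):
# analyse in situ when the reduced output is cheap enough to move,
# otherwise ship raw data and analyse at the destination.
def choose_placement(raw_gb: float, reduction: float,
                     link_gb_per_s: float, in_situ_cost_s: float) -> str:
    """Pick the strategy with the lower estimated end-to-end time."""
    ship_raw = raw_gb / link_gb_per_s                            # move raw, analyse remotely
    in_situ = in_situ_cost_s + (raw_gb * reduction) / link_gb_per_s
    return "in-situ" if in_situ < ship_raw else "ship-raw"

# 100 GB output, analytics keeps 5%, 1 GB/s link, 20 s compute contention:
print(choose_placement(100, 0.05, 1.0, 20))   # in-situ (25 s vs 100 s)
```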
Abstract:
Rapid advances in sequencing technologies (Next Generation Sequencing, or NGS) have led to a vast increase in the quantity of bioinformatics data available, and this increasing scale presents enormous challenges to researchers seeking to identify complex interactions. This paper is concerned with the domain of transcriptional regulation, and with the use of visualisation to identify relationships between specific regulatory proteins (the transcription factors, or TFs) and their associated target genes (TGs). We present preliminary work from an ongoing study which aims to determine the effectiveness of different visual representations and large-scale displays in supporting discovery. Following an iterative process of implementation and evaluation, representations were tested by potential users in the bioinformatics domain to determine their efficacy, and to better understand the range of ad hoc practices among bioinformatics-literate users. Results from two rounds of small-scale user studies are considered, with initial findings suggesting that bioinformaticians require richly detailed views of TF data, features to quickly compare TF layouts between organisms, and ways to keep track of interesting data points.
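The study's own visualisations are not reproduced here; as a minimal sketch of the underlying data model, TF-TG relationships form a directed graph from which alternative layouts can be generated and compared (the edges below are hypothetical):

```python
# Minimal sketch of the underlying data model (not the study's own tools):
# transcription factors and target genes as a directed graph.
import networkx as nx

g = nx.DiGraph()
# Hypothetical regulatory edges: TF -> target gene
g.add_edges_from([("TF_A", "gene1"), ("TF_A", "gene2"),
                  ("TF_B", "gene2"), ("TF_B", "gene3")])

# Which TFs co-regulate gene2? (the kind of relationship users looked for)
print(sorted(g.predecessors("gene2")))   # ['TF_A', 'TF_B']

# A layout is just node coordinates; different algorithms give different views.
pos = nx.spring_layout(g, seed=1)
print({n: pos[n].round(2).tolist() for n in g})
```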
Abstract:
An ongoing challenge for Learning Analytics research has been the scalable derivation of user interaction data from multiple technologies. The complexities associated with this challenge are increasing as educators embrace an ever-growing number of social and content-related technologies. The Experience API (xAPI), alongside the development of user-specific record stores, has been touted as a means of addressing this challenge, but a number of subtle considerations must be made when using xAPI in Learning Analytics. This paper provides a general overview of the complexities and challenges of using xAPI in a general systemic analytics solution, the Connected Learning Analytics (CLA) toolkit. The importance of design is emphasised, as is the notion of common vocabularies and xAPI Recipes. Early decisions about vocabularies and the structural relationships between statements can serve either to facilitate or to handicap later analytics solutions. The CLA toolkit case study provides a way of examining both the strengths and the weaknesses of the current xAPI specification, and we conclude with a proposal for how xAPI might be improved by using JSON-LD to formalise Recipes in a machine-readable form.
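The paper's proposed formalisation is not reproduced in the abstract. As an illustration of the JSON-LD idea only, a recipe could pin its terms to IRIs via an @context so that the constraints become machine-readable (the base and recipe IRIs below are hypothetical):

```python
# Illustration of the JSON-LD idea; this is not the CLA toolkit's published
# recipe format. The @context maps each short term to an IRI, making the
# recipe's constraints machine-interpretable.
import json

forum_reply_recipe = {
    "@context": {
        "xapi": "https://w3id.org/xapi/ontology#",       # hypothetical base IRI
        "verb": {"@id": "xapi:verb", "@type": "@id"},
        "objectType": {"@id": "xapi:activityType", "@type": "@id"},
    },
    "@id": "https://example.org/recipes/forum-reply",    # hypothetical recipe IRI
    "verb": "http://adlnet.gov/expapi/verbs/responded",  # standard ADL verb
    "objectType": "https://example.org/activitytypes/forum-reply",  # hypothetical
}
print(json.dumps(forum_reply_recipe, indent=2))
```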
Abstract:
This demonstration introduces the Connected Learning Analytics (CLA) Toolkit. The CLA toolkit harvests data about student participation in specified learning activities across standard social media environments, and presents information about the nature and quality of the learning interactions.
Abstract:
In competitive combat sporting environments like boxing, statistics on a boxer's performance, including the number and type of punches thrown, provide a valuable source of data and feedback which is routinely used for coaching and performance improvement. This paper presents a robust framework for the automatic classification of a boxer's punches. Overhead depth imagery is employed to alleviate the challenges associated with occlusion, and robust body-part tracking is developed for the noisy time-of-flight sensors. Punch recognition is addressed through both multi-class SVM and Random Forest classifiers, and a coarse-to-fine hierarchical SVM classifier is presented based on prior knowledge of boxing punches. The framework has been applied to shadow-boxing image sequences of eight elite boxers, taken at the Australian Institute of Sport. The results demonstrate the effectiveness of the proposed approach, with the hierarchical SVM classifier yielding 96% accuracy, signifying its suitability for analysing athletes' punches in boxing bouts.
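The paper's depth-imagery features and punch hierarchy are not given in the abstract; the following scikit-learn sketch illustrates only the coarse-to-fine idea, using synthetic stand-in features: a first SVM separates broad punch families, and a per-family SVM then refines the label.

```python
# Coarse-to-fine sketch with synthetic stand-in features (not the paper's
# depth-imagery features). Stage 1 predicts the punch family; stage 2 uses a
# per-family SVM to pick the specific punch.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fake_features(center, n=40):
    return rng.normal(center, 0.3, size=(n, 2))

# Synthetic 2-D "pose" features for four punch types in two families.
X = np.vstack([fake_features(c) for c in [(0, 0), (0, 1), (3, 0), (3, 1)]])
fine = np.repeat(["jab", "cross", "hook", "uppercut"], 40)
family = np.where(np.isin(fine, ["jab", "cross"]), "straight", "bent")

coarse_clf = SVC(kernel="rbf").fit(X, family)                  # stage 1: family
fine_clfs = {f: SVC(kernel="rbf").fit(X[family == f], fine[family == f])
             for f in ("straight", "bent")}                    # stage 2: punch

def classify(x):
    fam = coarse_clf.predict(x.reshape(1, -1))[0]
    return fine_clfs[fam].predict(x.reshape(1, -1))[0]

print(classify(np.array([0.1, 1.05])))   # likely "cross"
print(classify(np.array([2.9, -0.1])))   # likely "hook"
```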
Abstract:
Since 2007, close collaboration between the Learning and Teaching Unit's Academic Quality and Standards team and the Department of Reporting and Analysis's Business Objects team has resulted in a generational approach to reporting through which QUT established a place of trust: one where data owners are confident in how data is stored, how its integrity is maintained, and how it is reported and shared. While the Department of Reporting and Analysis focused on the data warehouse, data security and the publication of reports, the Academic Quality and Standards team focused on applying learning analytics to answer academic research questions and improve student learning, addressing questions such as:
• Are all students who leave course ABC academically challenged?
• Do the students who leave course XYZ stay within the faculty, stay within the university, or leave altogether?
• When students withdraw from a unit, do they stay enrolled on a full or part load, or leave?
• If students enter through a particular pathway, what is their experience in comparison to other pathways?
• With five years of historic reporting, can a two-year predictive forecast provide any insight?
In answering these questions, the Academic Quality and Standards team developed prototype data visualisations through curriculum conversations with academic staff. Where these enquiries were applicable more broadly, the information was brought into the standardised reporting for the benefit of the whole institution. At QUT, an annual report to the executive committees allows all stakeholders to record the performance and outcomes of all courses as a snapshot in time, or to use the live report at any point during the year. This approach to learning analytics won the 2014 ATEM/Campus Review Best Practice Awards in Tertiary Education Management (The Unipromo Award for Excellence in Information Technology Management).
Abstract:
This thesis describes current and past n-in-one methods and presents three early experimental studies, using mass spectrometry and the triple quadrupole instrument, on the application of n-in-one in drug discovery. The n-in-one strategy pools and mixes samples in drug discovery prior to measurement or analysis, allowing the most promising compounds to be rapidly identified and then analysed. Nowadays, the properties of drugs are characterised earlier and in parallel with pharmacological efficacy. The studies presented here use in vitro methods, such as Caco-2 cells and immobilized artificial membrane chromatography, for drug absorption and lipophilicity measurements. The high sensitivity and selectivity of liquid chromatography-mass spectrometry are especially important for new analytical methods using n-in-one.

In the first study, the fragmentation patterns of ten nitrophenoxy benzoate compounds, a serial homology, were characterised and the presence of the compounds was determined in a combinatorial library. The influence of one or two nitro substituents, and of alkyl chain lengths from methyl to pentyl, on collision-induced fragmentation was studied, and interesting structure-fragmentation relationships were detected. Compounds with two nitro groups fragmented more than those with one, whereas less fragmentation was noted in molecules with a longer alkyl chain. The most abundant product ions were nitrophenoxy ions, which were also tested in precursor ion screening of the combinatorial library.

In the second study, the immobilized artificial membrane chromatographic method was transferred from ultraviolet detection to mass spectrometric analysis and a new method was developed. Mass spectra were scanned and the chromatographic retention of compounds was analysed using extracted ion chromatograms. When detectors and buffers were changed and n-in-one was included in the method, the results showed good correlation. Finally, the results demonstrated that mass spectrometric detection with gradient elution can provide a rapid and convenient n-in-one method for ranking the lipophilic properties of several structurally diverse compounds simultaneously.

In the final study, a new method was developed for Caco-2 samples. Compounds were separated by liquid chromatography and quantified by selected reaction monitoring using mass spectrometry. This method was applied to Caco-2 samples in which the absorption of ten chemically and physiologically different compounds was screened using both single and n-in-one approaches.

These three studies used mass spectrometry for compound identification, method transfer and quantitation in the area of mixture analysis. A different mass spectrometric scanning mode of the triple quadrupole instrument was used in each method. Early drug discovery with n-in-one is an area where mass spectrometric analysis, its possibilities and proper use, is especially important.