876 results for text analytics
Abstract:
Objective: Melanoma is on the rise, especially in Caucasian populations exposed to high ultraviolet radiation, such as in Australia. This paper examined the psychological components facilitating change in skin cancer prevention or early detection behaviours following a text message intervention. Methods: The Queensland-based participants were 18 to 42 years old, from the Healthy Text study (N = 546). Overall, 512 (94%) participants completed the 12-month follow-up questionnaires. Following the social cognitive model, potential mediators of skin self-examination (SSE) and sun protection behaviour change were examined using stepwise logistic regression models. Results: At 12-month follow-up, the odds of performing an SSE in the past 12 months were mediated by baseline confidence in finding time to check skin (an outcome expectation), with a change in odds ratio of 11.9% in the SSE group versus the control group when the mediator was included. The odds of a greater-than-average sun protective habits index at 12-month follow-up were mediated by (a) an attempt to get a suntan at baseline (an outcome expectation) and (b) the baseline sun protective habits index, with changes in odds ratio of 10.0% and 11.8%, respectively, in the SSE group versus the control group. Conclusions: Few of the suspected mediation pathways were confirmed, with the exception of outcome expectations and past behaviours. Future intervention programmes could use alternative theoretical models to elucidate how improvements in health behaviours can best be facilitated.
Abstract:
An ongoing challenge for Learning Analytics research has been the scalable derivation of user interaction data from multiple technologies. The complexities associated with this challenge are increasing as educators embrace an ever-growing number of social and content-related technologies. The Experience API (xAPI), alongside the development of user-specific record stores, has been touted as a means to address this challenge, but a number of subtle considerations must be made when using xAPI in Learning Analytics. This paper provides a general overview of the complexities and challenges of using xAPI in a general systemic analytics solution, called the Connected Learning Analytics (CLA) toolkit. The importance of design is emphasised, as is the notion of common vocabularies and xAPI Recipes. Early decisions about vocabularies and structural relationships between statements can serve either to facilitate or to handicap later analytics solutions. The CLA toolkit case study provides us with a way of examining both the strengths and the weaknesses of the current xAPI specification, and we conclude with a proposal for how xAPI might be improved by using JSON-LD to formalise Recipes in a machine-readable form.
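To make the vocabulary discussion concrete, here is a minimal sketch of the actor/verb/object core that every xAPI statement shares. The verb IRI is a standard ADL vocabulary entry; the learner name, mailbox and activity URL are invented examples, not CLA toolkit output.

```python
import json

# Minimal xAPI statement sketch: every statement has an actor, a verb and an
# object. All concrete values below are invented for illustration.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",             # invented
        "mbox": "mailto:learner@example.org",  # invented
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/commented",
        "display": {"en-US": "commented"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.org/forum/thread/42",  # invented
    },
}

# A Learning Record Store would receive this as JSON.
payload = json.dumps(statement)
```

A Recipe, in these terms, fixes which verbs and activity types a given kind of interaction must use; making that constraint machine-readable is what the paper's JSON-LD proposal is about.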
Abstract:
This demonstration introduces the Connected Learning Analytics (CLA) Toolkit. The CLA toolkit harvests data about student participation in specified learning activities across standard social media environments, and presents information about the nature and quality of the learning interactions.
Abstract:
In this paper, we present the results of an exploratory study that examined the problem of automating content analysis of student online discussion transcripts. We looked at the problem of coding discussion transcripts for the levels of cognitive presence, one of the three main constructs in the Community of Inquiry (CoI) model of distance education. Using Coh-Metrix and LIWC features, together with a set of custom features developed to capture discussion context, we developed a random forest classification system that achieved 70.3% classification accuracy and 0.63 Cohen's kappa, significantly higher than the values reported in previous studies. Besides the improvement in classification accuracy, the developed system is also less sensitive to overfitting, as it uses only 205 classification features, around 100 times fewer than in similar systems based on bag-of-words features. We also provide an overview of the classification features most indicative of the different phases of cognitive presence, which gives additional insight into the nature of the cognitive presence learning cycle. Overall, our results show the great potential of the proposed approach, with the added benefit of providing further characterization of the cognitive presence coding scheme.
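Since agreement with human coders is the headline metric here, a short pure-Python sketch of Cohen's kappa (the chance-corrected agreement statistic reported above) may help. The phase labels below are CoI phase names used as invented toy data, not the study's transcripts.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two label sequences.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance from each coder's label mix.
    """
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(counts_a[label] * counts_b[label]
              for label in set(counts_a) | set(counts_b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Invented toy labels standing in for human codes vs. classifier output.
human = ["triggering", "exploration", "exploration", "integration"]
model = ["triggering", "exploration", "integration", "integration"]
kappa = cohens_kappa(human, model)
```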
Abstract:
XML documents are becoming more and more common in various environments. In particular, enterprise-scale document management is commonly centred around XML, and desktop applications as well as online document collections are soon to follow. The growing number of XML documents increases the importance of appropriate indexing methods and search tools in keeping the information accessible. Therefore, we focus on content that is stored in XML format as we develop such indexing methods. Because XML is used for different kinds of content, ranging all the way from records of data fields to narrative full texts, methods for Information Retrieval face a new challenge in identifying which content is subject to data queries and which should be indexed for full-text search. In response to this challenge, we analyse the relation of character content and XML tags in XML documents in order to separate the full text from the data. As a result, we are able both to reduce the size of the index by 5-6% and to improve retrieval precision as we select the XML fragments to be indexed. Besides being challenging, XML comes with many unexplored opportunities which have received little attention in the literature. For example, authors often tag the content they want to emphasise by using a typeface that stands out. The tagged content constitutes phrases that are descriptive of the content and useful for full-text search. They are simple to detect in XML documents, but easy to confuse with other inline-level text. Nonetheless, the search results seem to improve when the detected phrases are given additional weight in the index. Similar improvements are reported when related content, including titles, captions, and references, is associated with the indexed full text. Experimental results show that, for certain types of document collections at least, the proposed methods help us find the relevant answers. Even when we know nothing about the document structure but the XML syntax, we are able to take advantage of the XML structure when the content is indexed for full-text search.
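The character-content versus tag analysis described above can be sketched as a simple text-density heuristic using only the standard library. Both the heuristic and the threshold are assumptions for illustration, not the thesis's actual method.

```python
import xml.etree.ElementTree as ET

def text_density(elem):
    """Fraction of subtree characters that are character content rather than
    tag names. A crude, assumed stand-in for character/tag analysis."""
    parts = [elem.text or ""]
    for child in elem.iter():
        if child is elem:
            continue
        parts.append(child.text or "")
        parts.append(child.tail or "")
    text_chars = sum(len(p) for p in parts)
    markup_chars = sum(len(e.tag) for e in elem.iter())
    total = text_chars + markup_chars
    return text_chars / total if total else 0.0

def fulltext_fragments(xml_string, threshold=0.7):
    """Fragments dense enough in text to index for full-text search; the
    rest would be treated as data fields. The threshold is an assumption."""
    root = ET.fromstring(xml_string)
    return [e.tag for e in root.iter() if text_density(e) >= threshold]
```

On a toy document like `<doc><meta><id>42</id></meta><p>some long narrative...</p></doc>`, the narrative paragraph passes the threshold while the short data fields do not.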
Abstract:
In competitive combat sporting environments like boxing, statistics on a boxer's performance, including the number and type of punches thrown, provide a valuable source of data and feedback which is routinely used for coaching and performance improvement. This paper presents a robust framework for the automatic classification of a boxer's punches. Overhead depth imagery is employed to alleviate challenges associated with occlusions, and robust body-part tracking is developed for the noisy time-of-flight sensors. Punch recognition is addressed using both multi-class SVM and Random Forest classifiers. A coarse-to-fine hierarchical SVM classifier is presented based on prior knowledge of boxing punches. The framework has been applied to shadow-boxing image sequences of 8 elite boxers taken at the Australian Institute of Sport. Results demonstrate the effectiveness of the proposed approach, with the hierarchical SVM classifier yielding 96% accuracy, signifying its suitability for analysing athletes' punches in boxing bouts.
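The coarse-to-fine idea (classify the punch family first, then refine within that family) can be sketched as follows. The paper uses SVMs on depth-derived body-part tracks; this sketch substitutes a toy nearest-centroid classifier, and the 2-D "features" and punch labels are invented.

```python
# Coarse-to-fine (hierarchical) classifier sketch; illustrative only.

def fit_centroids(X, y):
    """label -> mean feature vector."""
    groups = {}
    for x, label in zip(X, y):
        groups.setdefault(label, []).append(x)
    return {label: tuple(sum(coord) / len(pts) for coord in zip(*pts))
            for label, pts in groups.items()}

def predict(centroids, x):
    """Label of the nearest centroid (squared Euclidean distance)."""
    return min(centroids, key=lambda lab: sum((a - b) ** 2
                                              for a, b in zip(x, centroids[lab])))

def coarse_to_fine(coarse, fine_by_family, x):
    """Stage 1 picks the punch family; stage 2 refines within that family."""
    family = predict(coarse, x)
    return family, predict(fine_by_family[family], x)

# Invented training data: two families, each with two specific punches.
coarse = fit_centroids([(0, 0), (1, 1), (10, 10), (11, 11)],
                       ["straight", "straight", "hook", "hook"])
fine = {"straight": fit_centroids([(-1, 0), (1, 0)], ["jab", "cross"]),
        "hook": fit_centroids([(9, 10), (12, 10)], ["lead hook", "rear hook"])}
```

The design point is that the coarse stage only has to separate easy, well-spread families, so each fine-grained classifier faces a much smaller and more homogeneous decision.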
Abstract:
This paper presents 'vSpeak', the first initiative taken in Pakistan for ICT-enabled conversion of dynamic Sign Urdu gestures into natural-language sentences. To realize this, vSpeak has adopted a novel approach to feature extraction using edge detection and image compression, which provides input to an Artificial Neural Network that recognizes the gesture. The technique also handles blurred images. Training and testing are currently being performed on a dataset of 200 patterns of 20 words from Sign Urdu, with a target accuracy of 90% or above.
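Edge detection as a feature-extraction front end can be illustrated with a crude gradient-magnitude sketch over a 2-D intensity grid. The kernel and threshold here are generic assumptions, not vSpeak's actual pipeline.

```python
def edge_map(img, thresh=1):
    """Crude gradient-magnitude edge detector over a 2-D list of intensities.
    An illustrative stand-in for an edge-detection step; the simple central
    differences and the threshold are generic assumptions."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = img[r][c + 1] - img[r][c - 1]  # horizontal gradient
            gy = img[r + 1][c] - img[r - 1][c]  # vertical gradient
            edges[r][c] = 1 if abs(gx) + abs(gy) >= thresh else 0
    return edges
```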
Abstract:
High-quality platelet analytics requires specialized knowledge and skills. Here, such analytics was applied to analyze platelet activation and aggregation responses in a prospective controlled study of patients with the Finnish type of amyloidosis. The 20 patients with AGel amyloidosis displayed a delayed and more profound platelet shape change than healthy siblings and healthy volunteers, which may be related to altered fragmentation of mutated gelsolin during platelet activation. Alterations in platelet shape change have not previously been reported in association with platelet disorders. In the rare Bernard-Soulier syndrome with the Asn45Ser mutation of glycoprotein (GP) IX, the diagnostic defect in the expression of the GPIb-IX-V complex was characterized in seven Finnish patients, an exceptionally large patient series internationally. When measuring thrombopoietin in serial samples of amniotic fluid and cord blood of 15 pregnant women with confirmed or suspected fetal alloimmune thrombocytopenia, the lower limit of detection could be extended. The results confirmed that thrombopoietin is present already in amniotic fluid. The application of various non-invasive means for diagnosing thrombocytopenia (TP) revealed that techniques for estimating the proportion of young, i.e. large, platelets, such as direct measurement of reticulated platelets and the mean platelet size, would be useful for evaluating platelet kinetics in a given patient. Owing to the different kinetics of thrombopoietin and the increase of young platelets in the circulation, these measurements may have most predictive value when measured from simultaneous samples. Platelet autoantibodies were present not only in isolated autoimmune TP but also in patients without TP, where the disappearance of platelets might be compensated by increased production. The autoantibodies may also persist after TP has been cured.
Simultaneous demonstration of increased young platelets (or increased mean platelet volume) in peripheral blood and the presence of platelet-associated IgG specificities to major glycoproteins (GPIb-IX and GPIIb-IIIa) may be considered diagnostic for autoimmune TP. Measurement of a soluble marker as a sign of thrombin activation and progressive deterioration of platelet components was applied to analyze the alterations of platelet products under several stress factors (storage, transportation, and lack of continuous shaking under controlled conditions). GPV measured as a soluble factor in the platelet storage medium showed good correlation with an array of other measurements commonly applied in the characterization of stored platelets. The benefits of measuring a soluble analyte in a quantitative assay were evident.
Abstract:
Since 2007, close collaboration between the Learning and Teaching Unit's Academic Quality and Standards team and the Department of Reporting and Analysis' Business Objects team has resulted in a generational approach to reporting through which QUT established a place of trust: one where data owners are confident in how data are stored, kept accurate, reported and shared. While the role of the Department of Reporting and Analysis focused on the data warehouse, data security and the publication of reports, the Academic Quality and Standards team focused on the application of learning analytics to answer academic research questions and improve student learning, addressing questions such as:
• Are all students who leave course ABC academically challenged?
• Do the students who leave course XYZ stay within the faculty, stay within the university, or leave?
• When students withdraw from a unit, do they stay enrolled on a full or part load, or leave?
• If students enter through a particular pathway, what is their experience in comparison to other pathways?
• With five years of historic reporting, can a two-year predictive forecast provide any insight?
In answering these questions, the Academic Quality and Standards team developed prototype data visualisations through curriculum conversations with academic staff. Where these enquiries were applicable more broadly, the information was brought into the standardised reporting for the benefit of the whole institution. At QUT, an annual report to the executive committees allows all stakeholders to record the performance and outcomes of all courses as a snapshot in time, or to use this live report at any point during the year. This approach to learning analytics received the 2014 ATEM/Campus Review Best Practice Award in Tertiary Education Management (The Unipromo Award for Excellence in Information Technology Management).
Abstract:
This paper describes an approach based on Zernike moments and Delaunay triangulation for the localization of hand-written text in machine-printed text documents. The Zernike moments of the image are evaluated first, and the text is classified as hand-written using a nearest-neighbour classifier. These features are independent of size, slant, orientation, translation and other variations in handwritten text. We then use Delaunay triangulation to reclassify the misclassified text regions. Imposing a Delaunay triangulation on the centroid points of the connected components, we extract features based on the triangles and reclassify the text. Noise components in the document are removed as a preprocessing step, so the method works well on noisy documents. The success rate of the method is found to be 86%; for specific hand-written elements such as signatures or similar text, the accuracy is even higher, at 93%.
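The triangle-based features mentioned above can be illustrated with a short sketch that computes side lengths, perimeter and area for one triangle of an already-computed Delaunay triangulation of component centroids. The exact feature set used in the paper is not specified here; this is an illustrative guess at the kind of geometry it can draw on.

```python
import math

def triangle_features(p1, p2, p3):
    """Side lengths, perimeter and area (Heron's formula) for one triangle
    of a triangulation; the points are (x, y) centroids of connected
    components. Illustrative feature set, not the paper's exact one."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    sides = sorted([dist(p1, p2), dist(p2, p3), dist(p3, p1)])
    s = sum(sides) / 2.0  # semi-perimeter
    area = math.sqrt(max(s * (s - sides[0]) * (s - sides[1]) * (s - sides[2]), 0.0))
    return {"sides": sides, "perimeter": 2.0 * s, "area": area}
```

Hand-written regions tend to produce irregular component centroids, so statistics over such per-triangle features can separate them from the regular grid of machine-printed text.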
Abstract:
This paper presents an overview of the 6th ALTA shared task, which ran in 2015. The task was to identify in English texts all the potential cognates from the perspective of the French language; in other words, to identify all the words in the English text that would acceptably translate into a similar word in French. We present the motivation for the task, a description of the data, and the results of the four participating teams. We discuss the results against a baseline and prior work.
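A naive orthographic baseline for this kind of task (flag word pairs with a small normalized edit distance) can be sketched as follows. Both the measure and the threshold are illustrative assumptions, not the shared task's official baseline.

```python
def edit_distance(a, b):
    """Levenshtein distance by dynamic programming (one rolling row)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def looks_cognate(english, french, threshold=0.5):
    """Flag a pair whose normalized edit distance is small. Illustrative
    assumption only; real systems also handle regular spelling shifts."""
    d = edit_distance(english.lower(), french.lower())
    return d / max(len(english), len(french)) <= threshold
```

Such a baseline catches identical or near-identical pairs ("nation"/"nation") but misses non-cognate translations ("dog"/"chien"), which is the behaviour a shared-task baseline is meant to expose.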
The birth of a public health problem: tuberculosis and the government of health in Finland before the Second World War
Abstract:
The study focuses on the emergence of tuberculosis as a public health problem and on the development of the various methods to counteract it in Finland before the introduction of efficient methods of treatment in the 1940s and 1950s. It covers the period from 1882, when the tuberculosis bacterium was identified, to the 1930s, when the early formation of tuberculosis work became established in Finland. During this time, important changes occurred in medicine, public health thinking and methods of personal health care that have been referred to as the bacteriological revolution. The study places tuberculosis prevention in this context and shows how the tuberculosis problem affected the government of health along all three of these dimensions. The study is based on Foucauldian analytics of government, supplemented with perspectives from contemporary science and technology studies. In addition, it utilises a broad array of work in medical history. The central research materials consist of medical journals, official programmes and documents on tuberculosis policy, and health education texts. The general conclusions of the study are twofold. Firstly, the ensemble of tuberculosis work was formed from historically diverse and often conflicting elements. The identification of the pathogen was only the first step in the establishment of tuberculosis as a major public health problem. Also important were the attention of the science of hygiene and the statistical reasoning that dominated public health thinking in the late 19th century. Furthermore, the adoption of the bacteriological tuberculosis doctrine in medicine, public health work and health education was profoundly influenced by previous understandings of the nature of the illness, of medical work, of the prevention of contagious diseases, and of personal health care. The two central institutions of tuberculosis work, the sanatorium and the dispensary, also have heterogeneous origins and multifarious functions.
Secondly, bacteriology, represented in this study by tuberculosis, remodelled medical knowledge and practices, the targets and methods of public health policy, and the doctrine of personal health care. Tuberculosis provided a strong argument for specific causes (if not cures), as well as for laboratory methods, in medicine. Tuberculosis prevention contributed substantially to the development whereby a comprehensive responsibility for the health of the population and for public health work was added to the agenda of the state. Health advice on tuberculosis and other contagious diseases used dangerous bacteria to motivate personal health care and redefined it as protecting oneself from the attacks of external pathogens and strengthening oneself against their effects. Thus, tuberculosis work is one important root of the contemporary public concern for the health of the population and of the imperative of personal health care.
Abstract:
This thesis describes current and past n-in-one methods and presents three early experimental studies, using mass spectrometry and the triple quadrupole instrument, on the application of n-in-one in drug discovery. The n-in-one strategy pools and mixes samples in drug discovery prior to measurement or analysis, allowing the most promising compounds to be rapidly identified and then analysed. Nowadays, the properties of drugs are characterised earlier and in parallel with pharmacological efficacy. The studies presented here use in vitro methods such as Caco-2 cells and immobilized artificial membrane chromatography for drug absorption and lipophilicity measurements. The high sensitivity and selectivity of liquid chromatography-mass spectrometry are especially important for new analytical methods using n-in-one. In the first study, the fragmentation patterns of ten nitrophenoxy benzoate compounds, a homologous series, were characterised and the presence of the compounds was determined in a combinatorial library. The influence on collision-induced fragmentation of one or two nitro substituents and of alkyl chain lengths from methyl to pentyl was studied, and interesting structure-fragmentation relationships were detected. Compounds with two nitro groups fragmented more than those with one, whereas less fragmentation was noted in molecules with a longer alkyl chain. The most abundant product ions were nitrophenoxy ions, which were also tested in precursor ion screening of the combinatorial library. In the second study, the immobilized artificial membrane chromatographic method was transferred from ultraviolet detection to mass spectrometric analysis and a new method was developed. Mass spectra were scanned and the chromatographic retention of compounds was analysed using extracted ion chromatograms. When detectors and buffers were changed and n-in-one was included in the method, the results showed good correlation.
Finally, the results demonstrated that mass spectrometric detection with gradient elution can provide a rapid and convenient n-in-one method for ranking the lipophilic properties of several structurally diverse compounds simultaneously. In the final study, a new method was developed for Caco-2 samples. Compounds were separated by liquid chromatography and quantified by selected reaction monitoring using mass spectrometry. The method was used for Caco-2 samples in which the absorption of ten chemically and physiologically different compounds was screened using both single and n-in-one approaches. These three studies used mass spectrometry for compound identification, method transfer and quantitation in the area of mixture analysis. A different mass spectrometric scanning mode of the triple quadrupole instrument was used in each method. Early drug discovery with n-in-one is an area where mass spectrometric analysis, its possibilities and its proper use, is especially important.