937 results for Text analysis
Abstract:
Background Managing large student cohorts can be a challenge for the university academics coordinating these units. Bachelor of Nursing programmes have the added challenge of managing multiple groups of students and clinical facilitators whilst students complete clinical placement. Clear, time-efficient and effective communication between coordinating academics and clinical facilitators is needed to ensure consistency between student and teaching groups and prompt management of emerging issues. Methods This study used a descriptive survey to explore the use of text messaging via a mobile phone, sent from coordinating academics to off-campus clinical facilitators, as an approach to providing direction and support. Results The response rate was 47.8% (n = 22). Correlations were found between the approachability of the coordinating academic and clinical facilitators' perceptions that (a) the coordinating academic understood issues on clinical placement (r = 0.785, p < 0.001) and (b) they were part of the teaching team (r = 0.768, p < 0.001). Analysis of responses to qualitative questions revealed three themes: connection, approachability and collaboration. Conclusions This study demonstrates that the use of regular text messages improves communication between coordinating academics and clinical facilitators. Findings suggest improved connection, approachability and collaboration between the coordinating academic and clinical facilitation staff.
Abstract:
Genomic sequences are fundamentally text documents, admitting various representations according to need and tokenization. Gene expression depends crucially on the binding of enzymes to the DNA sequence at small, poorly conserved binding sites, limiting the utility of standard pattern search. However, one may exploit the regular syntactic structure of the enzyme's component proteins and the corresponding binding sites, framing the problem as one of detecting grammatically correct genomic phrases. In this paper we propose new kernels based on weighted tree structures, traversing the paths within them to capture the features which underpin the task. Experimentally, we find that these kernels provide performance comparable with state-of-the-art approaches for this problem, while offering significant computational advantages over earlier methods. The methods proposed may be applied to a broad range of sequence or tree-structured data in molecular biology and other domains.
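To make the general idea of a path-based tree kernel concrete, the following Python sketch maps each tree to a weighted bag of root-to-node label paths and takes the inner product of the two feature maps. The nested-dict tree representation, the decay weighting and the function names are illustrative assumptions, not the kernels defined in the paper.

```python
from collections import Counter

def weighted_path_features(tree, decay=0.5):
    """Map a tree to a weighted bag of root-to-node label paths.
    `tree` is a nested dict: {"label": str, "children": [subtrees]}.
    A path of depth d contributes weight decay**d, so deeper paths count less."""
    feats = Counter()

    def walk(node, prefix):
        path = prefix + (node["label"],)
        feats[path] += decay ** (len(path) - 1)
        for child in node.get("children", []):
            walk(child, path)

    walk(tree, ())
    return feats

def path_kernel(t1, t2, decay=0.5):
    """Kernel value = inner product of the two weighted path feature maps."""
    f1 = weighted_path_features(t1, decay)
    f2 = weighted_path_features(t2, decay)
    return sum(w * f2[p] for p, w in f1.items() if p in f2)

# Toy example: two small parse-like trees over sequence tokens.
a = {"label": "S", "children": [{"label": "A", "children": []},
                                {"label": "B", "children": [{"label": "A", "children": []}]}]}
b = {"label": "S", "children": [{"label": "B", "children": [{"label": "A", "children": []}]}]}
print(path_kernel(a, b))
```

Because the feature map is explicit, the resulting function is a valid positive semi-definite kernel, which is the property such tree-path constructions rely on.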
Abstract:
Background The purpose of this study was to estimate the incidence of fatal and non-fatal Low Speed Vehicle Run Over (LSVRO) events among children aged 0–15 years in Queensland, Australia, at a population level. Methods Fatal and non-fatal LSVRO events that occurred in children resident in Queensland over eleven calendar years (1999–2009) were identified using ICD codes, text descriptions, word searches and medical notes clarification, obtained from five health-related databases spanning the continuum of care (pre-hospital to fatality). Data were manually linked. Population data provided by the Australian Bureau of Statistics were used to calculate crude incidence rates for fatal and non-fatal LSVRO events. Results There were 1611 LSVROs between 1999 and 2009 (IR = 16.87/100,000/annum). The incidence of non-fatal events (IR = 16.60/100,000/annum) was 61.5 times higher than that of fatal events (IR = 0.27/100,000/annum). LSVRO events were more common in boys (IR = 20.97/100,000/annum) than girls (IR = 12.55/100,000/annum), and among younger children aged 0–4 years (IR = 21.45/100,000/annum; 39% of all events) than older children (5–9 years: IR = 16.47/100,000/annum; 10–15 years: IR = 13.59/100,000/annum). A total of 896 (56.8%) children were admitted to hospital for 24 hours or more following an LSVRO event (IR = 9.38/100,000/annum). Total LSVROs increased from 1999 (IR = 14.79/100,000) to 2009 (IR = 18.56/100,000), but not significantly. Over the 11-year period, there was a slight (non-significant) increase in fatalities (IR = 0.37–0.42/100,000/annum), a significant decrease in admissions (IR = 12.39–5.36/100,000/annum), and a significant increase in non-admissions (IR = 2.02–12.77/100,000/annum). Trends over time differed by age, gender and severity. Conclusion This is the most comprehensive, population-based epidemiological study of fatal and non-fatal LSVRO events to date. Results from this study indicate that LSVROs incur a substantial burden. Further research is required on the characteristics and risk factors associated with these events in order to adequately inform injury prevention. Strategies to prevent these events are urgently required, especially among young children aged 0–4 years.
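As a worked example of how a crude rate of this kind is computed, the short sketch below reproduces the overall figure from the reported event count. The average child population used as the denominator is a hypothetical value chosen so the arithmetic matches the reported 16.87/100,000/annum; the study itself used Australian Bureau of Statistics population data.

```python
def crude_rate_per_100k(events, population, years):
    """Crude incidence rate per 100,000 population per annum."""
    return events / (population * years) * 100_000

# Reported totals: 1611 LSVRO events over 11 years (1999-2009).
# The ~868,000 average child population below is a hypothetical denominator,
# not a figure taken from the study.
print(round(crude_rate_per_100k(1611, 868_000, 11), 2))  # -> 16.87
```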
Abstract:
Background Maternity care reform plans have been proposed at state and national levels in Australia, but the extent to which these respond to maternity care consumers' expressed needs is unclear. This study examines open-text survey comments to identify women's unmet needs and priorities for maternity care, and then considers whether these needs and priorities are addressed in current reform plans. Methods Women who had a live single or multiple birth in Queensland, Australia, in 2010 (n = 3,635) were invited to complete a retrospective self-report survey. In addition to questions about clinical and interpersonal maternity care experiences from pregnancy to postpartum, women were asked an open-ended question: "Is there anything else you'd like to tell us about having your baby?" This paper describes a detailed thematic analysis of open-ended responses from a random selection of 150 women (10% of the 1,510 who responded to the question). Results Four broad themes emerged relevant to improving women's experiences of maternity care: quality of care (interpersonal and technical); access to choices and involvement in decision-making; unmet information needs; and dissatisfaction with the care environment. Some of these topics are reflected in current reform goals, while others provide evidence of the need for further reforms. Conclusions The findings reinforce the importance of some existing maternity reform objectives and describe how these might best be met. They affirm the importance of information provision to enable informed choices, a goal of Queensland and national reform agendas. Improvement opportunities not currently specified in reform agendas were also identified, including the quality of interpersonal relationships between women and staff, particular unmet information needs (e.g., breastfeeding), and concerns regarding the care environment (e.g., crowding and long waiting times).
Abstract:
Efficient Error-Propagating Block Chaining (EPBC) is a block cipher mode intended to simultaneously provide both confidentiality and integrity protection for messages. Mitchell's analysis pointed out a weakness in the EPBC integrity mechanism that can be used in a forgery attack. This paper identifies and corrects a flaw in Mitchell's analysis of EPBC, and presents other attacks on the EPBC integrity mechanism.
Abstract:
Background The requirement for dual screening of titles and abstracts to select papers to examine in full text can create a huge workload, not least when the topic is complex and a broad search strategy is required, resulting in a large number of hits. An automated system to reduce this burden, while still assuring high accuracy, has the potential to provide substantial efficiency savings within the review process. Objectives To undertake a direct comparison of manual screening with a semi-automated process (priority screening) using a machine classifier. The research is being carried out as part of the current update of a population-level public health review. Methods The authors have hand-selected studies for the review update, in duplicate, using standard Cochrane Handbook methodology. A retrospective analysis, simulating a quasi-'active learning' process (whereby a classifier is repeatedly retrained on 'manually' labelled data), will be completed using different starting parameters. Tests will be carried out to see how far different training sets, and the size of the training set, affect classification performance; i.e., what percentage of papers would need to be manually screened to locate 100% of the papers included as a result of the traditional manual method. Results From a search retrieval set of 9555 papers, the authors excluded 9494 papers at title/abstract and 52 at full text, leaving 9 papers for inclusion in the review update. The ability of the machine classifier to reduce the percentage of papers that need to be manually screened to identify all the included studies, under different training conditions, will be reported. Conclusions The findings of this study will be presented along with an estimate of any efficiency gains for the author team if the screening process can be semi-automated using text mining methodology, together with a discussion of the implications of text mining for screening papers within complex health reviews.
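A retrospective priority-screening simulation of the kind described can be sketched as a simple loop: screen a seed set, train a classifier on everything screened so far, screen the highest-ranked unscreened records, and repeat until every included study has been found. The sketch below uses TF-IDF features and logistic regression purely for illustration; the function name, parameters and classifier are assumptions, not the review team's actual tooling.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def simulate_priority_screening(texts, labels, seed_size=50, batch_size=100, rng_seed=0):
    """Return the fraction of records 'manually' screened by the time every
    included study (label 1) has been found under priority screening."""
    X = TfidfVectorizer(stop_words="english").fit_transform(texts)
    y = np.asarray(labels)                              # 1 = included at title/abstract
    rng = np.random.default_rng(rng_seed)
    screened = list(rng.choice(len(y), size=seed_size, replace=False))
    while True:
        remaining = np.setdiff1d(np.arange(len(y)), screened)
        if y[remaining].sum() == 0:                     # all includes already found
            return len(screened) / len(y)
        if len(set(y[screened])) < 2:                   # classifier needs both classes
            nxt = rng.choice(remaining, size=min(batch_size, len(remaining)), replace=False)
        else:
            clf = LogisticRegression(max_iter=1000, class_weight="balanced")
            clf.fit(X[screened], y[screened])
            scores = clf.predict_proba(X[remaining])[:, 1]
            nxt = remaining[np.argsort(-scores)[:batch_size]]
        screened.extend(nxt)
```

Varying `seed_size`, `batch_size` and the random seed corresponds to the "different starting parameters" and training-set sizes the analysis plans to test.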
Abstract:
This article presents and evaluates a model to automatically derive word association networks from text corpora. Two aspects were evaluated: the degree to which corpus-based word association networks (CANs) can approximate human word association networks with respect to (1) their ability to quantitatively predict word associations and (2) their structural network characteristics. Word association networks are the basis of the human mental lexicon. However, extracting such networks from human subjects is laborious and time consuming, and thus necessarily limited in relation to the breadth of human vocabulary. Automatic derivation of word associations from text corpora would address these limitations. In both evaluations, corpus-based processing provided vector representations for words. These representations were then employed to derive CANs using two measures: (1) the well-known cosine metric, which is a symmetric measure, and (2) a new asymmetric measure computed from orthogonal vector projections. For both evaluations, the full set of 4068 free association networks (FANs) from the University of South Florida word association norms was used as baseline human data. Two corpus-based models were benchmarked for comparison: a latent topic model and latent semantic analysis (LSA). We observed that CANs constructed using the asymmetric measure were slightly less effective than the topic model in quantitatively predicting free associates, and slightly better than LSA. The structural network analysis revealed that CANs do approximate the FANs to an encouraging degree.
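The contrast between a symmetric and an asymmetric similarity measure can be made concrete with a short sketch. The asymmetric measure below, the length of the orthogonal projection of one word vector onto the direction of the other, is one plausible reading of a projection-based measure and is not claimed to be the article's exact formulation.

```python
import numpy as np

def cosine(u, v):
    """Symmetric: cosine(u, v) == cosine(v, u)."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def projection_length(u, v):
    """Asymmetric: length of the orthogonal projection of u onto the direction
    of v, |u . v| / ||v||. Swapping the arguments gives a different value
    whenever ||u|| != ||v||."""
    return float(abs(u @ v) / np.linalg.norm(v))

u = np.array([2.0, 0.4, 0.0])   # e.g. a longer, "stronger" word vector
v = np.array([0.9, 0.8, 0.1])
print(cosine(u, v), cosine(v, u))                        # equal
print(projection_length(u, v), projection_length(v, u))  # differ
```

An asymmetric measure of this kind can mirror the directionality of human free association, where "cue -> associate" strength is not the same as "associate -> cue" strength.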
Abstract:
The world is rich with information, such as signage and maps, that assists humans in navigating. We present a method to extract topological spatial information from a generic bitmap floor plan and build a topometric graph that can be used by a mobile robot for tasks such as path planning and guided exploration. The algorithm first detects and extracts text in an image of the floor plan. Using the locations of the extracted text, flood fill is used to find the rooms and hallways. Doors are found by matching SURF features, and these form the connections between rooms, which become the edges of the topological graph. Our system is able to automatically detect doors and differentiate between hallways and rooms, which is important for effective navigation. We show that our method can extract a topometric graph from a floor plan and is robust against ambiguous cases commonly seen in floor plans, including elevators and stairwells.
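A minimal sketch of the flood-fill step on a toy occupancy grid is shown below. The grid, the seed placement, and the idea of seeding each fill at a detected text label are illustrative assumptions; the text-detection and SURF-based door-matching stages of the full pipeline are omitted.

```python
from collections import deque
import numpy as np

def flood_fill_rooms(free, seeds):
    """Label connected free-space regions of a floor plan, one seed per room.
    `free` is a boolean grid (True = free space, False = wall); `seeds` maps a
    room name (e.g. taken from detected text) to a (row, col) inside the room."""
    labels = np.zeros(free.shape, dtype=int)
    names = {}
    for k, (name, (r, c)) in enumerate(seeds.items(), start=1):
        names[name] = k
        queue = deque([(r, c)])
        while queue:
            y, x = queue.popleft()
            if (0 <= y < free.shape[0] and 0 <= x < free.shape[1]
                    and free[y, x] and labels[y, x] == 0):
                labels[y, x] = k
                queue.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return labels, names

# Toy plan: column 2 is a wall separating two rooms; detected doors (e.g. via
# the SURF matching step) would later add graph edges between adjacent regions.
plan = np.array([[1, 1, 0, 1, 1],
                 [1, 1, 0, 1, 1],
                 [1, 1, 0, 1, 1]], dtype=bool)
labels, names = flood_fill_rooms(plan, {"office": (0, 0), "lab": (0, 4)})
print(names)
print(labels)
```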
Abstract:
The feasibility of different modern analytical techniques for the mass spectrometric detection of anabolic androgenic steroids (AAS) in human urine was examined in order to enhance current analytical practice and to find effective strategies for sports drug testing. A comparative study of the sensitivity and specificity of gas chromatography (GC) combined with low-resolution (LRMS) and high-resolution mass spectrometry (HRMS) in screening for AAS was carried out with four metabolites of methandienone. Measurements were done in selected ion monitoring mode, with HRMS using a mass resolution of 5000. With HRMS the detection limits were considerably lower than with LRMS, enabling detection of steroids at levels as low as 0.2-0.5 ng/ml. Even with HRMS, however, the biological background hampered the detection of some steroids. The applicability of liquid-phase microextraction (LPME) was studied with metabolites of fluoxymesterone, 4-chlorodehydromethyltestosterone, stanozolol and danazol. Factors affecting the extraction process were studied, and a novel LPME method with in-fiber silylation was developed and validated for GC/MS analysis of the danazol metabolite. The method allowed precise, selective and sensitive analysis of the metabolite and enabled simultaneous filtration, extraction, enrichment and derivatization of the analyte from urine without any further sample preparation steps. Liquid chromatographic/tandem mass spectrometric (LC/MS/MS) methods utilizing electrospray ionization (ESI), atmospheric pressure chemical ionization (APCI) and atmospheric pressure photoionization (APPI) were developed and applied to the detection of oxandrolone and metabolites of stanozolol and 4-chlorodehydromethyltestosterone in urine. All methods exhibited high sensitivity and specificity. ESI showed the best applicability, however, and an LC/ESI-MS/MS method for routine screening of nine 17-alkyl-substituted AAS was therefore developed, enabling fast and precise measurement of all analytes with detection limits below 2 ng/ml. The potential of chemometrics to resolve complex GC/MS data was demonstrated with samples prepared for AAS screening. Acquired full-scan spectral data (m/z 40-700) were processed with the OSCAR algorithm (Optimization by Stepwise Constraints of Alternating Regression). The deconvolution process was able to extract from a GC/MS run more than double the number of components visible as chromatographic peaks. Severely overlapping components, as well as components hidden in the chromatographic background, could be isolated successfully. All of the techniques studied proved to be useful analytical tools for improving the detection of AAS in urine. The superiority of any particular procedure is, however, compound-dependent, and the different techniques complement each other.
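The alternating-regression idea behind that kind of deconvolution can be illustrated as a bilinear factorisation with non-negativity clipping: the data matrix (scans by m/z channels) is split into elution profiles and component spectra that are refined in turn. This is a generic, textbook-style alternating least squares sketch, not the OSCAR implementation, and the synthetic data are purely illustrative.

```python
import numpy as np

def alternating_regression(D, n_components, n_iter=200, seed=0):
    """Factor D (scans x m/z channels) into non-negative elution profiles C and
    spectra S with D ~= C @ S, by alternating non-negative least squares."""
    rng = np.random.default_rng(seed)
    C = rng.random((D.shape[0], n_components))
    for _ in range(n_iter):
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0], 0, None)        # fix C, solve S
        C = np.clip(np.linalg.lstsq(S.T, D.T, rcond=None)[0].T, 0, None)  # fix S, solve C
    return C, S

# Synthetic test: two severely overlapping Gaussian "peaks" with distinct spectra.
t = np.linspace(0, 1, 100)[:, None]
profiles = np.hstack([np.exp(-((t - 0.45) ** 2) / 0.005),
                      np.exp(-((t - 0.55) ** 2) / 0.005)])
spectra = np.abs(np.random.default_rng(1).normal(size=(2, 50)))
D = profiles @ spectra + 0.01 * np.random.default_rng(2).normal(size=(100, 50))
C, S = alternating_regression(D, n_components=2)
print(np.linalg.norm(D - C @ S) / np.linalg.norm(D))   # relative reconstruction error
```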
Abstract:
Solid materials can exist in different physical structures without a change in chemical composition. This phenomenon, known as polymorphism, has several implications for pharmaceutical development and manufacturing. The various solid forms of a drug can possess different physical and chemical properties, which may affect processing characteristics and stability, as well as the performance of the drug in the human body. Knowledge and control of the solid forms are therefore fundamental to maintaining the safety and high quality of pharmaceuticals. During manufacture, harsh conditions can give rise to unexpected solid phase transformations and thereby change the behavior of the drug. Traditionally, pharmaceutical production has relied on time-consuming off-line analysis of production batches and finished products, which has led to poor understanding of processes and drug products. New, powerful methods that enable real-time monitoring of pharmaceuticals during manufacturing processes are therefore greatly needed. The aim of this thesis was to apply spectroscopic techniques to solid phase analysis within different stages of drug development and manufacturing, and thus provide a molecular-level insight into the behavior of active pharmaceutical ingredients (APIs) during processing. Applications to polymorph screening and different unit operations were developed and studied. A new approach to dissolution testing, involving simultaneous measurement of drug concentration in the dissolution medium and in-situ solid phase analysis of the dissolving sample, was introduced and studied. Solid phase analysis was successfully performed during the different stages, enabling a molecular-level insight into the phenomena occurring. Near-infrared (NIR) spectroscopy was utilized in screening for polymorphs and processing-induced transformations (PITs). Polymorph screening was also studied with NIR and Raman spectroscopy in tandem. Quantitative solid phase analysis during fluidized bed drying was performed with in-line NIR and Raman spectroscopy and partial least squares (PLS) regression, and different dehydration mechanisms were studied using in-situ spectroscopy and partial least squares discriminant analysis (PLS-DA). In-situ solid phase analysis with Raman spectroscopy during dissolution testing enabled analysis of dissolution as a whole and provided a scientific explanation for changes in the dissolution rate. It was concluded that the methods applied and studied provide better process understanding and knowledge of the drug products, and therefore a way to achieve better quality.
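A quantitative calibration of the kind described, regressing in-line spectra onto solid-form content with PLS, can be sketched as follows. The synthetic "spectra", band positions and component names are invented for illustration, and scikit-learn's PLSRegression stands in for whatever chemometrics software was actually used in the thesis.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Synthetic "NIR spectra" of binary mixtures: noisy linear combinations of two
# made-up pure-component bands (hypothetical hydrate and anhydrate forms).
rng = np.random.default_rng(1)
wavelengths = np.linspace(0.0, 1.0, 200)
pure_hydrate = np.exp(-((wavelengths - 0.3) ** 2) / 0.01)
pure_anhydrate = np.exp(-((wavelengths - 0.7) ** 2) / 0.02)
hydrate_fraction = rng.uniform(0, 1, 40)
spectra = (np.outer(hydrate_fraction, pure_hydrate)
           + np.outer(1 - hydrate_fraction, pure_anhydrate)
           + rng.normal(0, 0.01, (40, 200)))

# PLS calibration: predict hydrate content directly from the spectra.
pls = PLSRegression(n_components=2)
pls.fit(spectra, hydrate_fraction)
print(pls.predict(spectra[:3]).ravel())   # compare with the reference values below
print(hydrate_fraction[:3])
```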
Abstract:
The study proposes a method for identifying the personal imprint of literary translators in translated works of fiction. The initial assumption was that the style of a target text is not determined solely by the literary style of the author but also by features of its translator's idiolect. A method was developed for identifying the idiolectal features of individual translators, which were then used to describe personal translation styles. The method is not restricted to a particular language pair. To test the method and to establish the nature of the proposed personal imprint empirically, extracts from four English-language literary source texts (two novels by James Joyce and two by Ernest Hemingway) were first compared with their translations into Finnish (by four different translators) in order to identify changes, or shifts, that had taken place at the formal linguistic level in the translation process. To allow individual propensities to manifest themselves, only optional shifts, in which the translators had a range of choices available to them, were included in the study. In the second phase, extracts by different authors rendered into Finnish by the same translator were compared in order to gauge the extent of the potential impact of the author's style on the translator's work. In-depth analysis of the types of shifts made most frequently by the individual translators revealed further intersubjective differences, and the shifts were used to construct a translation profile for each of the translators. In order to determine the potential effects of frequently occurring shifts on the target text, some central concepts of narratology were adapted and used to establish an intermediate link between microlevel choices and macrolevel effects. In this way the propensity of an individual translator to opt for certain types of shift could be linked with the overall artistic effect of the target text.