914 results for "coerência textual"
Abstract:
Background Prescription medicine samples provided by pharmaceutical companies are predominantly newer and more expensive products. The range of samples provided to practices may not represent the drugs that the doctors desire to have available. Few studies have used a qualitative design to explore the reasons behind sample use. Objective The aim of this study was to explore the opinions of a variety of Australian key informants about prescription medicine samples, using a qualitative methodology. Methods Twenty-three organizations involved in quality use of medicines in Australia were identified, based on the authors' previous knowledge. Each organization was invited to nominate 1 or 2 representatives to participate in semistructured interviews utilizing seeding questions. Each interview was recorded and transcribed verbatim. Leximancer v2.25 text analysis software (Leximancer Pty Ltd., Jindalee, Queensland, Australia) was used for textual analysis. The top 10 concepts from each analysis group were interrogated back to the original transcript text to determine the main emergent opinions. Results A total of 18 key interviewees representing 16 organizations participated. Samples, patient, doctor, and medicines were the major concepts among general opinions about samples. The concept drug became more frequent and the concept companies appeared when marketing issues were discussed. The Australian Pharmaceutical Benefits Scheme and cost were more prevalent in discussions about alternative sample distribution models, indicating interviewees were cognizant of budgetary implications. Key interviewee opinions added richness to the single-word concepts extracted by Leximancer. Conclusions Participants recognized that prescription medicine samples have an influence on quality use of medicines and play a role in the marketing of medicines. They also believed that alternative distribution systems for samples could provide benefits. 
The cost of a noncommercial system for distributing samples or starter packs was a concern. These data will be used to design further research investigating alternative models for distribution of samples.
Abstract:
Evaluation in higher education is an evolving social practice, that is, it involves what people, institutions and broader systems do and say, how they do and say it, what they value, the effects of these practices and values, and how meanings are ascribed. The textual products (verbal, written, visual, gestural) that inform and are produced by, for and through evaluative practices are important as they promulgate particular kinds of meanings and values in specific contexts. This paper reports on an exploratory study that sought to investigate, using discourse analysis, the types of evaluative practices that were ascribed value, and the student responses that ensued, in different evaluative instruments. Findings indicate that when a reflective approach is taken to evaluation, students’ responses are more considered, they interrogate their own engagement in the learning context and they are more likely to demonstrate reconstructive thought. These findings have implications for reframing evaluation as reflective learning.
Abstract:
Engineers must have a deep and accurate conceptual understanding of their field, and concept inventories (CIs) are one method of assessing conceptual understanding and providing formative feedback. Current CI tests use multiple-choice questions (MCQs) to identify misconceptions and have undergone reliability and validity testing to assess conceptual understanding. However, they do not readily provide diagnostic information about students' reasoning and therefore do not effectively point to specific actions that can be taken to improve student learning. We piloted the textual component of our diagnostic CI on electrical engineering students using items from the signals and systems CI. We then analysed the textual responses using automated lexical analysis software to test the effectiveness of this type of software, and interviewed the students regarding their experience with the textual component. Results from the automated text analysis revealed that students held both incorrect and correct ideas in certain conceptual areas and provided indications of student misconceptions. User feedback also revealed that the inclusion of the textual component helps students assess and reflect on their own understanding.
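The automated lexical analysis described above can be approximated, at its simplest, by ranking content-word frequencies across free-text responses. The sketch below is a minimal stand-in, not the software the study used; the student responses and stop-word list are invented for illustration.

```python
from collections import Counter

STOPWORDS = {"the", "a", "is", "of", "to", "and", "in", "it"}

def top_concepts(responses, n=3):
    """Rank the most frequent content words across free-text responses,
    a rough stand-in for automated lexical analysis."""
    words = Counter()
    for r in responses:
        words.update(w for w in r.lower().split() if w not in STOPWORDS)
    return [w for w, _ in words.most_common(n)]

# Hypothetical student explanations (invented for illustration)
responses = [
    "the signal is shifted in time",
    "convolution flips and shifts the signal",
    "the output signal is the convolution of input and impulse response",
]
print(top_concepts(responses))  # 'signal' and 'convolution' rank highest
```

A real pipeline would add stemming and collocation detection, but even raw frequencies surface the concepts students lean on when explaining their answers.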
Abstract:
Objective To synthesise recent research on the use of machine learning approaches to mining textual injury surveillance data. Design Systematic review. Data sources The electronic databases searched included PubMed, CINAHL, MEDLINE, Google Scholar, and ProQuest. The bibliographies of all relevant articles were examined and associated articles were identified using a snowballing technique. Selection criteria For inclusion, articles were required to meet the following criteria: (a) used a health-related database, (b) focused on injury-related cases, and (c) used machine learning approaches to analyse textual data. Methods The papers identified through the search were screened, resulting in 16 papers selected for review. Articles were reviewed to describe the databases and methodology used, the strengths and limitations of different techniques, and the quality assurance approaches used. Due to heterogeneity between studies, meta-analysis was not performed. Results Occupational injuries were the focus of half of the machine learning studies, and the most common methods described were Bayesian probability or Bayesian network based methods, used either to predict injury categories or to extract common injury scenarios. Models were evaluated through comparison with gold standard data, content expert evaluation, or statistical measures of quality. Machine learning was found to provide high precision and accuracy when predicting a small number of categories, and was valuable for visualisation of injury patterns and prediction of future outcomes. However, difficulties related to generalisability, source data quality, complexity of models, and integration of content and technical knowledge were discussed. Conclusions The use of narrative text for injury surveillance has grown in popularity, complexity and quality over recent years.
With advances in data mining techniques, increased capacity for analysis of large databases, and involvement of computer scientists in the injury prevention field, along with more comprehensive use and description of quality assurance methods in text mining approaches, it is likely that we will see a continued growth and advancement in knowledge of text mining in the injury field.
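The Bayesian prediction of injury categories that the reviewed studies describe can be sketched as a small multinomial naive Bayes classifier over narrative text. This is a minimal illustration, not any reviewed system; the injury narratives and category labels are invented.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Train a multinomial naive Bayes model from (text, label) pairs."""
    label_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)   # label -> word frequencies
    vocab = set()
    for text, label in docs:
        for w in text.lower().split():
            word_counts[label][w] += 1
            vocab.add(w)
    return label_counts, word_counts, vocab, len(docs)

def classify(text, model):
    """Pick the label maximising log P(label) + sum log P(word|label)."""
    label_counts, word_counts, vocab, n = model
    best, best_lp = None, float("-inf")
    for label, c in label_counts.items():
        lp = math.log(c / n)  # class prior
        total = sum(word_counts[label].values())
        for w in text.lower().split():
            # Laplace smoothing over the shared vocabulary
            lp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Hypothetical injury narratives (invented for illustration)
training = [
    ("worker fell from ladder while painting", "fall"),
    ("slipped on wet floor and fell", "fall"),
    ("hand caught in press machine", "machinery"),
    ("finger crushed by conveyor belt", "machinery"),
]
model = train_nb(training)
print(classify("employee fell off scaffolding", model))  # → fall
```

As the review notes, this kind of model is precise when the number of categories is small; generalisability depends heavily on how representative the training narratives are.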
Abstract:
In this thesis we present and evaluate two pattern-matching-based methods for answer extraction in textual question answering systems. A textual question answering system is a system that seeks answers to natural language questions from unstructured text. Textual question answering systems are an important research problem: as the amount of natural language text in digital format keeps growing, the need for novel methods for pinpointing important knowledge in vast textual databases becomes ever more urgent. We concentrate on developing methods for the automatic creation of answer extraction patterns; a new type of extraction pattern is also developed. The pattern-matching-based approach chosen is interesting because of its language and application independence. The answer extraction methods are developed in the framework of our own question answering system. Publicly available datasets in English are used as training and evaluation data for the methods. The techniques developed are based on the well-known methods of sequence alignment and hierarchical clustering, with a similarity metric based on edit distance. The main conclusion of the research is that answer extraction patterns consisting of the most important words of the question, together with the following information extracted from the answer context: plain words, part-of-speech tags, punctuation marks, and capitalization patterns, can be used in the answer extraction module of a question answering system. Patterns of this type, and the two new methods for generating them, produce average results when compared to those of other systems using the same dataset. However, most answer extraction methods in the question answering systems tested on the same dataset are both hand-crafted and based on a system-specific, fine-grained question classification. The new methods developed in this thesis require no manual creation of answer extraction patterns.
As a source of knowledge, they require a dataset of sample questions and answers, as well as a set of text documents that contain answers to most of the questions. The question classification used in the training data is a standard one and provided already in the publicly available data.
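The building blocks the thesis names, edit distance and hierarchical clustering, can be sketched as follows. This is a generic illustration, not the thesis's actual pattern-generation method; the answer-context strings and the distance threshold are invented.

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def cluster(items, threshold):
    """Single-linkage agglomerative clustering with an edit-distance cutoff."""
    clusters = [[x] for x in items]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(edit_distance(a, b)
                        for a in clusters[i] for b in clusters[j])
                if d <= threshold:
                    clusters[i] += clusters.pop(j)
                    merged = True
                    break
            if merged:
                break
    return clusters

# Hypothetical answer contexts mixing words and POS tags (invented)
contexts = ["was born in NNP in CD", "was born in NNP on CD",
            "died in NNP in CD"]
print(edit_distance("kitten", "sitting"))  # → 3
print(len(cluster(contexts, 2)))           # → 2 clusters
```

Clustering similar answer contexts is the natural first step before generalising each cluster into a single extraction pattern.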
Abstract:
Concept inventory tests are one method of evaluating conceptual understanding and identifying possible misconceptions. The multiple-choice question format, offering a choice between a correct selection and common misconceptions, can assess students' conceptual understanding along various dimensions. Misconceptions of some engineering concepts arise from a lack of mental frameworks, or schemas, for those concepts or conceptual areas. This study incorporated an open textual response component into a multiple-choice concept inventory test to capture written explanations of students' selections. The study's goal was to identify, through text analysis of student responses, the types and categorizations of concepts in these explanations that had not been uncovered by the distractor selections. The analysis of the textual explanations for a subset of the discrete-time signals and systems concept inventory questions revealed that students have difficulty conceptually explaining several dimensions of signal processing. This contributed to their inability to provide a clear explanation of the underlying concepts, such as the mathematical ones. The methods used in this study evaluate students' understanding of signals and systems concepts through their ability to express that understanding in written text, which may introduce a bias in favor of students with strong written communication skills. This study presents a framework for extracting and identifying the types of concepts students use to express their reasoning when answering conceptual questions.
Abstract:
We address the task of mapping a given textual domain model (e.g., an industry-standard reference model) for a given domain (e.g., ERP) onto the source code of an independently developed application in the same domain. This has applications in improving the understandability of an existing application, migrating it to a more flexible architecture, or integrating it with other related applications. We use the vector-space model to abstractly represent domain model elements as well as source-code artifacts. The key novelty in our approach is to leverage the relationships between source-code artifacts in a principled way to improve the mapping process. We describe experiments wherein we apply our approach to the task of matching two real, open-source applications to corresponding industry-standard domain models. We demonstrate the overall usefulness of our approach, as well as the role of our propagation techniques in improving the precision and recall of the mapping task.
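The vector-space matching step can be illustrated with bag-of-words vectors and cosine similarity. This is a minimal sketch that omits the paper's propagation over artifact relationships; the domain-model elements, identifier texts, and file names are invented.

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse term vectors."""
    dot = sum(u[w] * v[w] for w in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical domain-model elements and source artifacts, the latter
# represented by words split out of their identifiers (invented)
domain = {"SalesOrder": "sales order customer line item",
          "Invoice": "invoice billing payment due"}
code = {"OrderService.java": "create sales order add line item",
        "BillingDao.java": "fetch invoice record payment"}

for elem, dtext in domain.items():
    best = max(code, key=lambda f: cosine(vectorize(dtext), vectorize(code[f])))
    print(elem, "->", best)
# SalesOrder -> OrderService.java, Invoice -> BillingDao.java
```

A propagation step in the spirit of the paper would then boost the scores of artifacts that reference already well-matched artifacts, rather than scoring each file in isolation.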
Abstract:
Analyses the role of the Partido dos Trabalhadores (PT) in the Chamber of Deputies during the Social Security reforms of 1995-98 and 2003. It problematizes the PT's behaviour at distinct historical moments, considering the political positions it occupied, in opposition and in government, enabling an interesting debate about strategies of political articulation, ideological coherence, and the contradictions the party displayed when facing an issue as complex and contentious as changes to the pension system. It discusses how a political party with a singular trajectory and a history of grassroots origins, though quite heterogeneous in its composition, conducted the debate and the pension reform processes, taking into account the historical context, internal and external pressures, and the uncertainty surrounding transformations of such a far-reaching issue, whose nature, beyond the technical aspect, is also social and political, as is social security itself.
Abstract:
[ES] This paper studies the use of discourse markers and of asyndeton as means of textual articulation between the various statements that make up the "Progumnásmata" of Nicolao. The study makes it possible to observe whether there are differences between the two parts that make up Felten's edition, and whether Nicolao's use of particles differs from that of the other authors of "Progumnásmata".
Abstract:
Raquel Merino Álvarez, José Miguel Santamaría, Eterio Pajares (eds.)
Abstract:
[EN] Measuring semantic similarity and relatedness between textual items (words, sentences, paragraphs or even documents) is a very important research area in Natural Language Processing (NLP). In fact, it has many practical applications in other NLP tasks, for instance Word Sense Disambiguation, Textual Entailment, Paraphrase Detection, Machine Translation, Summarization, and other related tasks such as Information Retrieval or Question Answering. In this master's thesis we study different approaches to computing the semantic similarity between textual items. In the framework of the European PATHS project, we also evaluate a knowledge-based method on a dataset of cultural item descriptions. Additionally, we describe the work carried out for the Semantic Textual Similarity (STS) shared task of SemEval-2012. This work involved supporting the creation of datasets for similarity tasks, as well as the organization of the task itself.
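A minimal lexical baseline for semantic textual similarity, far simpler than the knowledge-based methods the thesis evaluates, is Jaccard word overlap; the sentence pairs below are invented for illustration.

```python
def jaccard(s1, s2):
    """Jaccard word-overlap, a common lexical baseline for STS:
    |intersection| / |union| of the two word sets."""
    a, b = set(s1.lower().split()), set(s2.lower().split())
    return len(a & b) / len(a | b) if a | b else 1.0

pairs = [
    ("a man is playing a guitar", "a man plays the guitar"),   # similar
    ("a man is playing a guitar", "a chef is cooking dinner"), # unrelated
]
for s1, s2 in pairs:
    print(round(jaccard(s1, s2), 2))
```

Such surface-overlap baselines fail exactly where knowledge-based methods help: paraphrases like "plays"/"is playing" share meaning but not tokens, which is why STS systems layer lexical resources or distributional similarity on top.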
Abstract:
This thesis aims to present a new attitude toward the teaching of text production. It is the result of a didactic-pedagogical experiment whose goal is to awaken in students the competence to produce texts. It describes techniques that, by exploring multiple languages and codes, stimulate students toward verbal expression and, in particular, the production of written texts. Grounded in semiotic-linguistic assumptions, the classroom dynamics create a space in which text production takes place in a playful, engaging way, free of the blocks that normally prevent students from being proficient in socio-communicative interaction and, specifically, in written text production across different textual genres. The three techniques that gave rise to this thesis belong to a set of fifteen proposed activities gathered under the title Técnicas de Comunicação e Expressão (TCE). These techniques seek to remove inhibitions and promote verbal expression, especially in writing. TCE (or the elective Semiótica & Linguagem) emerges as a new paradigm in the teaching of text production, offering future teachers motivating elements for textual practice, so as to energize a moment that is almost always synonymous with torture, fear, insecurity and, consequently, failure.