Abstract:
This paper presents the results of Task 3 of the ShARe/CLEF eHealth Evaluation Lab 2013. This evaluation lab focuses on improving access to medical information on the web. The task objective was to investigate the effect of using additional information, such as discharge summaries, and external resources, such as medical ontologies, on IR effectiveness. Participants were allowed to submit up to seven runs: one mandatory run using no additional information or external resources, and up to three runs each with and without the use of discharge summaries.
Abstract:
User-generated content plays a pivotal role in current social media. The main focus, however, has been on explicitly generated user content such as photos, videos and status updates on different social networking sites. In this paper, we explore the potential of implicitly generated user content, based on users’ online consumption behaviors. It is technically feasible to record users’ consumption behaviors on mobile devices and share them with relevant people. Mobile devices with such capabilities could enrich social interactions around the consumed content, but they may also threaten users’ privacy. To understand the potential of this design direction, we created and evaluated a low-fidelity prototype intended for photo sharing within private groups. Our prototype incorporates two design concepts, namely FingerPrint and MoodPhotos, which leverage users’ consumption history and emotional responses. In this paper, we report user values and user acceptance of this prototype from three participatory design workshops.
Abstract:
New parents cherish photos of their children. In their homes one can observe a varied set of arrangements of their young ones' photos. We studied eight families with young children to learn about their practices related to photos. We provide preliminary results from the field study and elaborate on three interesting themes that came out very strongly from our data: physical platforms; family dynamics and values; and creative uses of photos. These themes provide insight into the values families attach to curating, displaying and experiencing photos over a longer period. We provide future directions for supporting practices surrounding children's photos.
Abstract:
The sentencing of a self-confessed child sex offender and senior Brisbane Anglican priest, Canon Barry Greaves, in Brisbane District Court last Friday (April 24, 2009) is a significant event for many reasons and for many people. It is significant because Greaves was a priest at Boonah in the early 1980s when he committed the offences, and because knowledge of his own sex offending against children failed to deter him from seeking and gaining high office in the Anglican Church. He accepted the position of Archbishop’s chaplain to Brisbane Archbishop Dr Peter Hollingworth in 1999. He stayed on as Archbishop’s chaplain to the incoming Archbishop Dr Phillip Aspinall in 2002, and not even the disgrace of the sex scandal in the Brisbane Diocese resulted in a glimmer of guilt that maybe he was not an appropriate person to be providing pastoral care to other victims of sexual assault. Families of victims who were referred to Greaves for pastoral care are now flabbergasted by the double betrayal. “I went looking for comfort and now I discover I was confiding in a f***ing pedophile,” one woman said.
Abstract:
Objective: To evaluate the effectiveness and robustness of Anonym, a tool for de-identifying free-text health records based on conditional random fields classifiers informed by linguistic and lexical features, as well as features extracted by pattern matching techniques. De-identification of personal health information in electronic health records is essential for the sharing and secondary usage of clinical data. De-identification tools that adapt to different sources of clinical data are attractive as they would require minimal intervention to guarantee high effectiveness. Methods and Materials: The effectiveness and robustness of Anonym are evaluated across multiple datasets, including the widely adopted Integrating Biology and the Bedside (i2b2) dataset, used for evaluation in a de-identification challenge. The datasets used here vary in type of health records, source of data, and their quality, with one of the datasets containing optical character recognition errors. Results: Anonym identifies and removes up to 96.6% of personal health identifiers (recall) with a precision of up to 98.2% on the i2b2 dataset, outperforming the best system proposed in the i2b2 challenge. The effectiveness of Anonym across datasets is found to depend on the amount of information available for training. Conclusion: Findings show that Anonym is comparable to the best approach from the 2006 i2b2 shared task. Anonym is easy to retrain with new datasets; when retrained, the system is robust to variations in training size, data type and quality, provided sufficient training data is available.
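The abstract above includes no code; the following sketch only illustrates the general shape of a CRF-based de-identifier of this kind, using the sklearn-crfsuite library. The feature names, PHI labels and toy training data are illustrative assumptions, not Anonym's actual configuration.

```python
# Hypothetical sketch of CRF-based de-identification with lexical and
# pattern-matching features (labels and features are illustrative only).
import re
import sklearn_crfsuite

def token_features(tokens, i):
    """Lexical and pattern-matching features for one token."""
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_title": tok.istitle(),
        "is_digit": tok.isdigit(),
        "looks_like_date": bool(re.match(r"\d{1,2}/\d{1,2}/\d{2,4}$", tok)),
        "prev_lower": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next_lower": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

def featurize(tokens):
    return [token_features(tokens, i) for i in range(len(tokens))]

# Toy training data: each token is labelled with a PHI class or "O" (not PHI).
train_sents = [["Mr", "Smith", "was", "admitted", "on", "12/03/2006"]]
train_labels = [["O", "NAME", "O", "O", "O", "DATE"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit([featurize(s) for s in train_sents], train_labels)

# Tokens predicted as PHI would then be removed or replaced with surrogates.
print(crf.predict([featurize(["Mrs", "Jones", "seen", "on", "01/04/2007"])]))
```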
Abstract:
In the field of information retrieval (IR), researchers and practitioners are often faced with a demand for valid approaches to evaluate the performance of retrieval systems. The Cranfield experiment paradigm has been dominant for the in-vitro evaluation of IR systems. As an alternative to this paradigm, laboratory-based user studies have been widely used to evaluate interactive information retrieval (IIR) systems, and at the same time to investigate users’ information searching behaviours. Major drawbacks of laboratory-based user studies for evaluating IIR systems include the high monetary and temporal costs involved in setting up and running those experiments, the lack of heterogeneity amongst the user population and the limited scale of the experiments, which usually involve a relatively restricted set of users. In this article, we propose an alternative experimental methodology to laboratory-based user studies. Our novel experimental methodology uses a crowdsourcing platform as a means of engaging study participants. Through crowdsourcing, our experimental methodology can capture user interactions and searching behaviours at a lower cost, with more data, and within a shorter period than traditional laboratory-based user studies, and can therefore be used to assess the performance of IIR systems. In this article, we show the characteristic differences of our approach with respect to traditional IIR experimental and evaluation procedures. We also perform a use case study comparing crowdsourcing-based evaluation with laboratory-based evaluation of IIR systems, which can serve as a tutorial for setting up crowdsourcing-based IIR evaluations.
Abstract:
Background: Timely diagnosis and reporting of patient symptoms in hospital emergency departments (ED) is a critical component of health services delivery. However, due to dispersed information resources and a vast amount of manual processing of unstructured information, accurate point-of-care diagnosis is often difficult. Aims: The aim of this research is to report an initial experimental evaluation of a clinician-informed automated method addressing initial misdiagnoses associated with delayed receipt of unstructured radiology reports. Method: A method was developed that resembles clinical reasoning for identifying limb abnormalities. The method consists of a gazetteer of keywords related to radiological findings; it classifies an X-ray report as abnormal if the report contains evidence found in the gazetteer. A set of 99 narrative reports of radiological findings was sourced from a tertiary hospital. Reports were manually assessed by two clinicians, and discrepancies were validated by a third, expert ED clinician; the final manual classification generated by the expert ED clinician was used as ground truth to empirically evaluate the approach. Results: The automated method, which identifies limb abnormalities by searching for keywords supplied by clinicians, achieved an F-measure of 0.80 and an accuracy of 0.80. Conclusion: While the automated clinician-driven method achieved promising performance, a number of avenues for improvement were identified, including the use of advanced natural language processing (NLP) and machine learning techniques.
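As a rough illustration of the gazetteer approach described above, the classification step could look like the sketch below. The keyword list is invented for illustration; the clinician-elicited gazetteer is not reproduced in the abstract.

```python
# Minimal sketch of a gazetteer-style abnormality classifier for X-ray reports.
# Keywords are illustrative, not the clinicians' actual gazetteer. Note that this
# naive version does not handle negation ("no fracture seen"), one of the
# limitations motivating the NLP/ML improvements mentioned in the conclusion.
ABNORMALITY_GAZETTEER = {"fracture", "fractured", "dislocation", "avulsion", "displaced"}

def classify_report(report_text: str) -> str:
    """Label an X-ray report as 'abnormal' if any gazetteer keyword occurs."""
    tokens = {t.strip(".,;:").lower() for t in report_text.split()}
    return "abnormal" if tokens & ABNORMALITY_GAZETTEER else "normal"

print(classify_report("There is a displaced fracture of the distal radius."))  # abnormal
print(classify_report("No bony injury is identified."))                         # normal
```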
Abstract:
Background: Cancer monitoring and prevention relies on the critical aspect of timely notification of cancer cases. However, the abstraction and classification of cancer from the free text of pathology reports and other relevant documents, such as death certificates, are complex and time-consuming activities. Aims: In this paper, approaches for the automatic detection of notifiable cancer cases as the cause of death from free-text death certificates supplied to Cancer Registries are investigated. Method: A number of machine learning classifiers were studied. Features were extracted using natural language techniques and the Medtex toolkit. Features included stemmed words, bi-grams, and concepts from the SNOMED CT medical terminology. The baseline consisted of a keyword spotter using keywords extracted from the long descriptions of ICD-10 cancer-related codes. Results: Death certificates with notifiable cancer listed as the cause of death can be effectively identified with the methods studied in this paper. A Support Vector Machine (SVM) classifier achieved the best performance, with an overall F-measure of 0.9866 when evaluated on a set of 5,000 free-text death certificates using the token stem feature set. The SNOMED CT concept plus token stem feature set reached the lowest variance (0.0032) and false negative rate (0.0297) while achieving an F-measure of 0.9864. The SVM classifier accounted for the first 18 of the top 40 evaluated runs and was the most robust classifier, with a variance of 0.001141, half that of the other classifiers. Conclusion: The selection of features had the greatest influence on classifier performance, although the type of classifier employed also affected performance. In contrast, the feature weighting scheme had a negligible effect on performance. Specifically, stemmed tokens, with or without SNOMED CT concepts, were the most effective features when combined with an SVM classifier.
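A minimal sketch of an SVM over stemmed-token features, in the spirit of the approach described above; it is not the authors' Medtex pipeline, and the toy certificates, labels and library choices (NLTK, scikit-learn) are assumptions for illustration only.

```python
# Illustrative sketch: SVM over stemmed-token features for flagging death
# certificates that list a notifiable cancer (toy data, not the authors' pipeline).
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

stemmer = PorterStemmer()

def stem_analyzer(text):
    """Turn a certificate into a list of stemmed tokens (the 'token stem' features)."""
    return [stemmer.stem(tok) for tok in text.lower().split()]

# Toy examples; the actual evaluation used 5,000 manually coded certificates.
texts = [
    "metastatic adenocarcinoma of the lung",
    "acute myocardial infarction, ischaemic heart disease",
    "carcinoma of the prostate with bony metastases",
    "chronic obstructive pulmonary disease",
]
labels = [1, 0, 1, 0]  # 1 = notifiable cancer listed as cause of death

model = make_pipeline(CountVectorizer(analyzer=stem_analyzer), LinearSVC())
model.fit(texts, labels)
print(model.predict(["squamous cell carcinoma of the oesophagus"]))
```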
Abstract:
The role of Bone Tissue Engineering in the field of Regenerative Medicine has been the topic of substantial research over the past two decades. Technological advances have improved orthopaedic implants and surgical techniques for bone reconstruction. However, improvements in surgical techniques to reconstruct bone have been limited by the paucity of autologous materials available and by donor site morbidity. Recent advances in the development of biomaterials have provided attractive alternatives to bone grafting, expanding the surgical options for restoring the form and function of injured bone. Specifically, novel bioactive (second generation) biomaterials have been developed that are characterised by controlled action and reaction to the host tissue environment, whilst exhibiting controlled chemical breakdown and resorption with ultimate replacement by regenerating tissue. Future generations of biomaterials (third generation) are designed to be not only osteoconductive but also osteoinductive, i.e. to stimulate regeneration of host tissues by combining tissue engineering and in situ tissue regeneration methods, with a focus on novel applications. These techniques will lead to novel possibilities for tissue regeneration and repair. At present, complex skeletal defects, whether of post-traumatic, degenerative, neoplastic or congenital/developmental origin, require osseous reconstruction to ensure structural and functional integrity; tissue engineered constructs may find future use as bone grafts for such defects. Engineering functional bone using combinations of cells, scaffolds and bioactive factors is a promising strategy, and a particular feature for future development is the area of hybrid materials which are able to exhibit suitable biomimetic and mechanical properties. This review will discuss the state of the art in this field and what we can expect from future generations of bone regeneration concepts.
Abstract:
This paper discusses a model of the civil aviation regulation framework and shows how the current assessment of reliability and risk for piloted aircraft has limited applicability for Unmanned Aircraft Systems (UAS) with high levels of autonomous decision making. Then, a new framework for risk management of robust autonomy is proposed, which arises from combining quantified measures of risk with normative decision making. The term Robust Autonomy describes the ability of an autonomous system to either continue or abort its operation whilst not breaching a minimum level of acceptable safety in the presence of anomalous conditions. The decision making associated with risk management requires quantifying probabilities associated with the measures of risk and also consequences of outcomes related to the behaviour of autonomy. The probabilities are computed from an assessment under both nominal and anomalous scenarios described by faults, which can be associated with the aircraft’s actuators, sensors, communication link, changes in dynamics, and the presence of other aircraft in the operational space. The consequences of outcomes are characterised by a loss function which rewards the certification decision.
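The abstract gives no formulas; one generic way to write the normative decision step it describes, with all symbols being illustrative rather than the paper's notation, is the following expected-loss rule.

```latex
% Generic expected-loss formulation (notation illustrative, not the paper's):
% choose the action a (continue or abort the operation) that minimises the
% expected loss, with outcome probabilities assessed under nominal and
% anomalous scenarios s (actuator/sensor faults, lost link, changed dynamics,
% presence of other aircraft), and L(a, \omega) the loss of outcome \omega under a.
\[
  a^{*} = \arg\min_{a \in \{\mathrm{continue},\, \mathrm{abort}\}}
  \sum_{s} \Pr(s) \sum_{\omega} p(\omega \mid a, s)\, L(a, \omega)
\]
```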
Abstract:
This paper describes the Teaching Teachers of the Future (TTF) Project – a national project funded ($8.8mil AUD) by the Australian Government. The project was aimed at building the capacity of student teachers to use technology to improve student learning outcomes. It discusses the aims and objectives of the project, its genesis in a changing educational and political landscape, the use of TPACK as a theoretical scaffold, and briefly reports on the operations of the various components and partners. Further, it discusses the research opportunities afforded by the project, including a national survey of all pre-service teachers (PSTs) in Australia gauging their TPACK confidence and the use of the Most Significant Change (MSC) methodology. Finally, the paper discusses the outcomes of the project and its future.
Abstract:
For the past decade, at least, varieties of small, hand-held networked instruments have appeared on the global scene, selling in record numbers, and being utilized by all manner of persons from the old to the young; children, women, men, the wealthy and the poor and in all countries. Their presence bespeaks a radical shift in telecommunications infrastructure and the future of communications. They are particularly visible in urban areas where mobile transmission network infrastructure (3G, 4G, cellular and Wi-Fi) is more established and substantial, options more plentiful, and density of populations more dramatic. These end user products—iPhones, cell phones, Blackberries, DSi, DS, iPads, Zooms, and others – of the mobile communications industry are the latest, hottest globalized commodities. At the same time, wirelessness, or the state of being wireless, and therefore capable of taking along one's networks, communicating from unlikely spaces, and navigating with GPS, is a complex social, political and economic communications phenomenon of early 21st century life. This thesis examines the specter of being wireless in cities. It lends the entire idea an experimentally envisioned, historical and planned context wherein personalization of media tools is seen both as a design development of corporate, artistic, and military imagination, as well as a profound social phenomenon enabling new forms of sharing, belonging, and urban community. In doing so it asserts the parameters of a new mobile space which, aside from clear benefits to humankind by way of mobility, has reinscribed numerous categories including gender. Moreover, it posits the recognition of other, more nuanced theoretical spaces for complex readings of gender and gendered use, including some instantiation of the notion of 'network' itself as a cyborgian and gendered social form. Additionally, cities are studied as places where technology is not only quickly popularized, but is connected to larger political interests, such as the reading of data, tracking of information, and the new security culture. In so doing, the work has been undertaken as an urban spatial analysis and experimental ethnography, utilizing architectural, feminist, techno-utopian, industrial and theoretical literatures as discursive underpinnings from which understandings and interpretations of mobile space, the mobile office, networked mobility, and personal media have come, linking the space of cities to specific, pioneering urban public art projects in which voice, texting and MMS have been utilized in expressions of ubiquitous networks and urban history. Through numerous examples of techno art, the thesis discusses the 'wireless city' as an emerging cultural, socially constructed economic and spatial entity, both conceived and formed through historic processes of urbanization.
Abstract:
We present a study to understand the effect that negated terms (e.g., "no fever") and family history (e.g., "family history of diabetes") have on searching clinical records. Our analysis is aimed at devising the most effective means of handling negation and family history. In doing so, we explicitly represent a clinical record according to its different content types: negated, family history and normal content; the retrieval model weights each of these separately. Empirical evaluation shows that overall the presence of negation harms retrieval effectiveness, while family history has little effect. We show negation is best handled by weighting negated content (rather than the common practice of removing or replacing it). However, we also show that many queries benefit from the inclusion of negated content and that negation is optimally handled on a per-query basis. Additional evaluation shows that adaptive handling of negated and family history content can have significant benefits.
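A toy sketch of the field-weighting idea described above; the scoring function and the specific weights are illustrative assumptions (the paper weights fields within a proper retrieval model and finds per-query adaptation of the weights preferable to fixed ones).

```python
# Toy sketch: a clinical record is split into normal, negated and family-history
# content, each scored separately and combined with tunable weights.
from collections import Counter

# Arbitrary illustrative weights; the paper shows per-query adaptation works better.
FIELD_WEIGHTS = {"normal": 1.0, "negated": 0.2, "family_history": 0.0}

def field_score(query_terms, field_text):
    """Simple term-frequency match between the query and one content field."""
    tf = Counter(field_text.lower().split())
    return sum(tf[t] for t in query_terms)

def score_record(query, record_fields):
    """Combine the per-field scores using the field weights."""
    terms = query.lower().split()
    return sum(FIELD_WEIGHTS[field] * field_score(terms, text)
               for field, text in record_fields.items())

record = {
    "normal": "patient presents with cough and elevated temperature",
    "negated": "no fever no chest pain",
    "family_history": "family history of diabetes",
}
print(score_record("fever", record))     # negated mention contributes only 0.2
print(score_record("diabetes", record))  # family-history mention contributes 0.0
```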
Abstract:
Relevation! is a system for performing relevance judgements for information retrieval evaluation. Relevation! is web-based, fully configurable and expandable; it allows researchers to effectively collect assessments and additional qualitative data. The system is easily deployed allowing assessors to smoothly perform their relevance judging tasks, even remotely. Relevation! is available as an open source project at: http://ielab.github.io/relevation.
Abstract:
The top-k retrieval problem aims to find the optimal set of k documents from a number of relevant documents given the user’s query. The key issue is to balance the relevance and diversity of the top-k search results. In this paper, we address this problem using Facility Location Analysis taken from Operations Research, where the locations of facilities are optimally chosen according to some criteria. We show how this analysis technique is a generalization of state-of-the-art retrieval models for diversification (such as the Modern Portfolio Theory for Information Retrieval), which treat the top-k search results like “obnoxious facilities” that should be dispersed as far as possible from each other. However, Facility Location Analysis suggests that the top-k search results could be treated like “desirable facilities” to be placed as close as possible to their customers. This leads to a new top-k retrieval model where the best representatives of the relevant documents are selected. In a series of experiments conducted on two TREC diversity collections, we show that significant improvements can be made over the current state-of-the-art through this alternative treatment of the top-k retrieval problem.
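As a rough sketch of the "desirable facilities" view described above: a greedy selection over an illustrative document-similarity matrix, picking k documents so that the remaining candidates ("customers") are each close to some selected document. This is not the paper's actual model or estimation method; the similarity matrix and objective are assumptions for illustration.

```python
# Toy greedy facility-location selection of a top-k set of documents.
import numpy as np

def select_top_k(sim, k):
    """sim[i][j]: similarity between candidate documents i and j (in [0, 1])."""
    n = sim.shape[0]
    selected = []
    best_cover = np.zeros(n)  # how well each "customer" is covered so far
    for _ in range(k):
        gains = []
        for d in range(n):
            if d in selected:
                gains.append(-1.0)
            else:
                # Marginal gain in total coverage if d is opened as a new "facility".
                gains.append(np.maximum(best_cover, sim[d]).sum() - best_cover.sum())
        choice = int(np.argmax(gains))
        selected.append(choice)
        best_cover = np.maximum(best_cover, sim[choice])
    return selected

rng = np.random.default_rng(0)
S = rng.random((8, 8))
S = (S + S.T) / 2          # make the toy similarity matrix symmetric
np.fill_diagonal(S, 1.0)
print(select_top_k(S, k=3))
```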