16 results for Peucker, Eduard von
at Queensland University of Technology - ePrints Archive
Abstract:
Intuitively, any 'bag of words' approach in IR should benefit from taking term dependencies into account. Unfortunately, for years the results of exploiting such dependencies have been mixed or inconclusive. To improve the situation, this paper shows how the natural language properties of the target documents can be used to transform and enrich the term dependencies into more useful statistics. This is done in three steps. First, the term co-occurrence statistics of queries and documents are each represented by a Markov chain. The paper proves that such a chain is ergodic, and therefore its asymptotic behavior is unique, stationary, and independent of the initial state. Next, the stationary distribution is taken to model queries and documents, rather than their initial distributions. Finally, ranking is achieved following the customary language modeling paradigm. The main contribution of this paper is to argue why the asymptotic behavior of the document model is a better representation than just the document's initial distribution. A secondary contribution is to investigate the practical application of this representation as queries become increasingly verbose. In the experiments (based on Lemur's search engine substrate) the default query model was replaced by the stationary distribution of the query. Just modeling the query this way already resulted in significant improvements over a standard language model baseline. The results were on a par with or better than those of more sophisticated algorithms that use fine-tuned parameters or extensive training. Moreover, the more verbose the query, the more effective the approach seems to become.
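The abstract describes the pipeline only at a high level; as a rough sketch of its central step (with hypothetical term names, window size and smoothing, none of which are taken from the paper), the snippet below builds a row-stochastic transition matrix from term co-occurrences and power-iterates to its stationary distribution, which would then stand in for the raw term distribution in a query-likelihood language model.

```python
import numpy as np

def cooccurrence_matrix(tokens, vocab, window=5):
    """Count co-occurrences of vocabulary terms within a sliding window."""
    index = {t: i for i, t in enumerate(vocab)}
    counts = np.zeros((len(vocab), len(vocab)))
    for i, t in enumerate(tokens):
        if t not in index:
            continue
        for u in tokens[max(0, i - window): i + window + 1]:
            if u in index and u != t:
                counts[index[t], index[u]] += 1
    return counts

def stationary_distribution(counts, smoothing=1e-3, tol=1e-10):
    """Row-normalise the (smoothed) counts and power-iterate to the stationary distribution.
    Additive smoothing keeps the chain ergodic, so the limit is unique."""
    P = counts + smoothing                      # avoid absorbing or unreachable states
    P = P / P.sum(axis=1, keepdims=True)        # row-stochastic transition matrix
    pi = np.full(P.shape[0], 1.0 / P.shape[0])  # arbitrary initial state
    while True:
        nxt = pi @ P
        if np.abs(nxt - pi).sum() < tol:
            return nxt
        pi = nxt

# Hypothetical toy example: the stationary probabilities replace raw term frequencies.
vocab = ["markov", "chain", "retrieval", "query"]
tokens = "markov chain models for retrieval use the query as a markov chain".split()
print(stationary_distribution(cooccurrence_matrix(tokens, vocab)))
```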
Abstract:
Over the past ten years, minimally invasive plate osteosynthesis (MIPO) for the fixation of long bone fractures has become a clinically accepted method with good outcomes when compared to the conventional open surgical approach (open reduction internal fixation, ORIF). However, while MIPO offers some advantages over ORIF, it also has some significant drawbacks, such as a more demanding surgical technique and increased radiation exposure. No clinical or experimental study to date has shown a difference between the healing outcomes of fractures treated with the two surgical approaches. Therefore, a novel, standardised severe trauma model in sheep has been developed and validated in this project to examine the effect of the two surgical approaches on soft tissue and fracture healing. Twenty-four sheep were subjected to severe soft tissue damage and a complex distal femur fracture. The fractures were initially stabilised with an external fixator. After five days of soft tissue recovery, internal fixation with a plate was applied, randomised to either MIPO or ORIF. Within the first fourteen days, the soft tissue damage was monitored locally with a compartment pressure sensor and systemically by blood tests. Fracture healing progress was assessed fortnightly by X-ray. The sheep were sacrificed in two groups after four and eight weeks, and CT scans and mechanical testing were performed. Soft tissue monitoring showed significantly higher postoperative creatine kinase and lactate dehydrogenase values in the ORIF group compared to MIPO. After four weeks, torsional stiffness was significantly higher in the MIPO group than in the ORIF group (p=0.018). Torsional strength also showed higher values for the MIPO technique (p=0.11). The measured total mineralised callus volumes were slightly higher in the ORIF group. However, a newly developed morphological callus bridging score showed significantly higher values for the MIPO technique (p=0.007), with a high correlation to the mechanical properties (R2=0.79). After eight weeks, the same trends continued, but without statistical significance. In summary, this clinically relevant study, using the newly developed severe trauma model in sheep, clearly demonstrates that the minimally invasive technique minimises additional soft tissue damage and improves fracture healing in the early stage compared to the open surgical approach.
Abstract:
Recently, user tagging systems have grown in popularity on the web. The tagging process is quite simple for ordinary users, which contributes to its popularity. However, the free vocabulary lacks standardization and suffers from semantic ambiguity. It is possible to capture the semantics of user tagging and represent them in the form of an ontology, but applying the learned ontology to recommendation making has not flourished to the same extent. In this paper we discuss our approach to learning a domain ontology from user tagging information and apply the extracted tag ontology in a pilot tag recommendation experiment. The initial results show that by using the tag ontology to re-rank the recommended tags, the accuracy of the tag recommendation can be improved.
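The abstract does not detail the re-ranking step; the snippet below is a minimal sketch of the general idea, assuming a hypothetical ontology mapping each tag to related tags (none of the names or weights come from the paper): candidate tags whose ontology neighbours overlap the tags already applied to the resource are promoted.

```python
# Hypothetical sketch of ontology-based re-ranking of recommended tags.
# `ontology` maps each tag to a set of semantically related tags; it stands in
# for the tag ontology learned from user tagging data.

def rerank(candidates, existing_tags, ontology, weight=0.5):
    """Re-rank (tag, score) candidates by blending the original recommender score
    with the tag's ontology-based overlap with the resource's existing tags."""
    def ontology_support(tag):
        related = ontology.get(tag, set())
        return len(related & set(existing_tags)) / (len(existing_tags) or 1)

    return sorted(
        candidates,
        key=lambda ts: (1 - weight) * ts[1] + weight * ontology_support(ts[0]),
        reverse=True,
    )

# Toy example with made-up tags and scores.
ontology = {"python": {"programming", "scripting"}, "snake": {"reptile", "animal"}}
candidates = [("snake", 0.9), ("python", 0.8)]
print(rerank(candidates, existing_tags=["programming"], ontology=ontology))
```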
Abstract:
Two decades after its inception, Latent Semantic Analysis (LSA) has become part and parcel of every modern introduction to Information Retrieval. For any tool that matures so quickly, it is important to check its lore and limitations, or else stagnation will set in. We focus here on the three main aspects of LSA that are well accepted, the gist of which can be summarized as follows: (1) LSA recovers latent semantic factors underlying the document space; (2) this can be accomplished through lossy compression of the document space by eliminating lexical noise; and (3) the latter is best achieved by Singular Value Decomposition. For each aspect we performed experiments analogous to those reported in the LSA literature and compared the evidence brought to bear in each case. On the negative side, we show that the above claims about LSA are much more limited than commonly believed. Even the simple pedagogical example used in many LSA publications shows that LSA does not recover the intended optimal semantic factors. Additionally, and remarkably deviating from LSA lore, LSA does not scale up well: the larger the document space, the more unlikely it is that LSA recovers an optimal set of semantic factors. On the positive side, we describe new algorithms to replace LSA (and more recent alternatives such as pLSA, LDA, and kernel methods) by trading its l2 space for an l1 space, thereby guaranteeing an optimal set of semantic factors. These algorithms seem to salvage the spirit of LSA as we think it was initially conceived.
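For readers who want to see the mechanics being critiqued, the sketch below reproduces the textbook LSA recipe, a truncated SVD of a small, made-up term-document matrix; it illustrates claims (2) and (3) above, not the l1-based replacement proposed in the paper.

```python
import numpy as np

# Standard LSA recipe on a tiny, made-up term-document count matrix
# (rows = terms, columns = documents); not the example from any particular paper.
A = np.array([
    [2, 1, 0, 0],   # "ship"
    [1, 2, 0, 0],   # "boat"
    [0, 0, 2, 1],   # "tree"
    [0, 0, 1, 2],   # "forest"
], dtype=float)

k = 2                                          # number of latent semantic factors kept
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]    # rank-k (lossy, l2-optimal) reconstruction

doc_vectors = np.diag(s[:k]) @ Vt[:k, :]       # documents in the latent factor space
print(np.round(A_k, 2))
print(np.round(doc_vectors, 2))
```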
A new model to study healing of a complex femur fracture with concurrent soft tissue injury in sheep
Abstract:
High energy bone fractures resulting from impact trauma are often accompanied by subcutaneous soft tissue injuries, even if the skin remains intact. There is evidence that such closed soft tissue injuries affect the healing of bone fractures, and vice versa. Despite this knowledge, most impact trauma studies in animals have focussed on bone fractures or soft tissue trauma in isolation. However, given the simultaneous impact on both tissues, a better understanding of the interaction between these two injuries is necessary to optimise clinical treatment. The aim of this study was therefore to develop a new experimental model and characterise, for the first time, the healing of a complex fracture with concurrent closed soft tissue trauma in sheep. A pendulum impact device was designed to deliver a defined and standardised impact to the distal thigh of sheep, causing a reproducible contusion injury to the subcutaneous soft tissues. In a subsequent procedure, a reproducible butterfly fracture (AO C3-type) was created in the sheep's femur, which was initially stabilised for 5 days by an external fixator construct to allow the soft tissue swelling to recede, and ultimately in a bridging construct using locking plates. The combined injuries were applied to twelve sheep and healing was observed for four or eight weeks (six animals per group) until sacrifice. The pendulum impact led to a moderate to severe circumferential soft tissue injury with significant bruising, haematomas and partial muscle disruptions. Post-traumatic measurements showed elevated intra-compartmental pressure and circulating tissue breakdown markers, with recovery to normal, pre-injury values within four days. Clinically, no neurovascular deficiencies were observed. Bi-weekly radiological analysis of the healing fractures showed progressive callus healing over time, with the average number of callus bridges increasing from 0.4 at two weeks to 4.2 at eight weeks. Biomechanical testing after sacrifice showed torsional stiffness increasing from 10% to 100% of that of the contralateral control limb between four and eight weeks of healing, and ultimate torsional strength increasing from 10% to 64%. Our results demonstrate the robust healing of a complex femur fracture in the presence of a severe soft tissue contusion injury in sheep, and establish a clinically relevant experimental model for research aimed at improving the treatment of bone fractures accompanied by closed soft tissue injuries.
Abstract:
In most intent recognition studies, annotations of query intent are created post hoc by external assessors who are not the searchers themselves. It is important for the field to get a better understanding of the quality of this process as an approximation for determining the searcher's actual intent. Some studies have investigated the reliability of the query intent annotation process by measuring inter-assessor agreement. However, these studies did not measure the validity of the judgments, that is, to what extent the annotations match the searcher's actual intent. In this study, we asked both the searchers themselves and external assessors to classify queries using the same intent classification scheme. We show that of the seven dimensions in our intent classification scheme, four can reliably be used for query annotation. Of these four, only the annotations on the topic and spatial sensitivity dimensions are valid when compared with the searcher's annotations. The difference between the inter-assessor agreement and the assessor-searcher agreement was significant on all dimensions, showing that the agreement between external assessors is not a good estimator of the validity of the intent classifications. Therefore, we encourage the research community to consider using query intent classifications made by the searchers themselves as test data.
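The abstract does not name the agreement statistic used; a common choice for this kind of analysis is Cohen's kappa, sketched below on made-up intent labels for a single dimension.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: chance-corrected agreement between two annotators."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Made-up labels on one intent dimension (e.g. spatial sensitivity: yes/no).
searcher = ["yes", "no", "no", "yes", "no", "no"]
assessor = ["yes", "no", "yes", "yes", "no", "no"]
print(round(cohens_kappa(searcher, assessor), 3))   # assessor-searcher agreement
```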
Abstract:
A fear of imminent information overload predates the World Wide Web by decades. Yet that fear has never abated. Worse, as the World Wide Web today takes the lion's share of the information we deal with, both in amount and in time spent gathering it, the situation has only become more precarious. This chapter analyses new issues in information overload that have emerged with the advent of the Web, with an emphasis on written communication, defined in this context as the exchange of ideas expressed informally, often casually, as in spoken language. The chapter focuses on three ways to mitigate these issues. First, it helps us, the users, to be more specific in what we ask for. Second, it helps us amend our request when we don't get what we think we asked for. And third, since only we, the human users, can judge whether the information received is what we want, it makes retrieval techniques more effective by basing them on how humans structure information. This chapter reports on extensive experiments we conducted in all three areas. First, to let users be more specific in describing an information need, they were allowed to express themselves in an unrestricted conversational style. This way, they could convey their information need as if they were talking to a fellow human, instead of using the two or three words typically supplied to a search engine. Second, users were provided with effective ways to zoom in on the desired information once potentially relevant information became available. Third, a variety of experiments focused on the search engine itself as the mediator between the request for and delivery of information. All examples that are explained in detail have actually been implemented. The results of our experiments demonstrate how a human-centered approach can reduce information overload in an area that grows in importance with each passing day. By actually having built these applications, I present an operational, not merely aspirational, approach.
Abstract:
This study demonstrates a novel method for testing the hypothesis that variations in primary and secondary particle number concentration (PNC) in urban air are related to residual fuel oil combustion at a coastal port lying 30 km upwind, by examining the correlation between PNC and airborne particle composition signatures chosen for their sensitivity to the elemental contaminants present in residual fuel oil. Residual fuel oil combustion indicators were chosen by comparing the sensitivity of a range of concentration ratios to airborne emissions originating from the port. The most responsive were the vanadium and sulfur concentrations ([V], [S]) expressed as ratios with respect to the black carbon concentration ([BC]). These correlated significantly with ship activity at the port and with the fraction of time during which the wind blew from the port. The average [V] when the wind was predominantly from the port was 0.52 ng m-3 (87%) higher than the average for all wind directions, and 0.83 ng m-3 (280%) higher than that for the lowest vanadium-yielding wind direction, considered to approximate the natural background. Shipping was found to be the main source of V affecting urban air quality in Brisbane. However, contrary to the stated hypothesis, increases in PNC-related measures did not correlate with ship emission indicators or ship traffic. Hence, at this site, ship emissions were not found to be a major contributor to PNC compared to other fossil fuel combustion sources such as road traffic, airport and refinery emissions.
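As a rough illustration of the kind of analysis described (column names, wind sector and values are hypothetical, not data from the study), the snippet below computes [V]/[BC] and [S]/[BC] ratios and compares their means for winds from the port sector against all other directions.

```python
import pandas as pd

# Hypothetical time series of monitoring data; column names and values are illustrative only.
df = pd.DataFrame({
    "V_ng_m3":  [0.3, 1.1, 0.4, 1.3, 0.2],   # vanadium concentration
    "S_ng_m3":  [40, 120, 55, 140, 35],      # sulfur concentration
    "BC_ng_m3": [900, 800, 950, 780, 1000],  # black carbon concentration
    "wind_deg": [200, 75, 310, 80, 250],     # wind direction
})

# Ratios chosen for their sensitivity to residual fuel oil combustion.
df["V_over_BC"] = df["V_ng_m3"] / df["BC_ng_m3"]
df["S_over_BC"] = df["S_ng_m3"] / df["BC_ng_m3"]

# Assume (hypothetically) the port lies in the 60-100 degree wind sector.
from_port = df["wind_deg"].between(60, 100)
print(df.loc[from_port, ["V_over_BC", "S_over_BC"]].mean())
print(df.loc[~from_port, ["V_over_BC", "S_over_BC"]].mean())
```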
Abstract:
Currently, there is a limited understanding of the sources of ambient fine particles that contribute to the exposure of children at urban schools. Since the size and chemical composition of airborne particles are key parameters for determining their source as well as their toxicity, PM1 particles (particles with an aerodynamic diameter of less than 1 µm) were collected at 24 urban schools in Brisbane, Australia, and their elemental composition determined. Based on the elemental composition, four main sources were identified: secondary sulphates, biomass burning, vehicle emissions and industrial emissions. The largest contributing source was industrial emissions, which was considered the main source of the trace elements in the PM1 to which children were exposed at school. The elemental composition of PM1 at the schools was compared to that of PM2.5 particles (particles with an aerodynamic diameter of less than 2.5 µm) from a previous study conducted at a suburban site and a roadside site in Brisbane. This comparison revealed that the more toxic heavy metals (V, Cr, Ni, Cu, Zn and Pb), mostly from vehicle and industrial emissions, were predominantly in the PM1 fraction. Thus, the results of this study point to PM1 as a potentially better particle size fraction for investigating the health effects of airborne particles.
Abstract:
Typing two or three keywords into a browser has become an easy and efficient way to find information. Yet typing even short queries becomes tedious on ever-shrinking (virtual) keyboards. Meanwhile, speech processing is maturing rapidly, facilitating everyday language input. Also, wearable technology can inform users proactively by listening in on their conversations or processing their social media interactions. Given these developments, everyday language may soon become the new input of choice. We present an information retrieval (IR) algorithm specifically designed to accept everyday language. It integrates two paradigms of information retrieval, previously studied in isolation: one directed mainly at the surface structure of language, the other primarily at the underlying meaning. The integration was achieved by a Markov machine that encodes meaning by its transition graph, and surface structure by the language it generates. A rigorous evaluation of the approach showed, first, that it can compete with the quality of existing language models; second, that it is more effective the more verbose the input; and third, as a consequence, that it is promising for an imminent transition from keyword input, where the onus is on the user to formulate concise queries, to a modality where users can express their need for information more freely, more informally, and more naturally in everyday language.
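The Markov machine itself is not specified in the abstract; as a hedged sketch of the general idea rather than the paper's actual algorithm, the snippet below builds a bigram transition graph from a document and scores a verbose query by mixing a surface (unigram) probability with a graph (transition) probability.

```python
from collections import Counter, defaultdict

def build_model(tokens):
    """Unigram counts plus a bigram transition graph built from adjacent tokens."""
    unigrams = Counter(tokens)
    transitions = defaultdict(Counter)
    for prev, cur in zip(tokens, tokens[1:]):
        transitions[prev][cur] += 1
    return unigrams, transitions

def score(query_tokens, unigrams, transitions, lam=0.5, eps=1e-6):
    """Mix surface (unigram) and graph (transition) probabilities over a verbose query."""
    total = sum(unigrams.values())
    s = 1.0
    for prev, cur in zip(query_tokens, query_tokens[1:]):
        p_unigram = unigrams[cur] / total if total else 0.0
        outgoing = sum(transitions[prev].values())
        p_transition = transitions[prev][cur] / outgoing if outgoing else 0.0
        s *= lam * p_unigram + (1 - lam) * p_transition + eps   # eps avoids zero scores
    return s

# Toy document and verbose, everyday-language query (all text made up).
doc = "the museum opens late on friday evenings with free entry for students".split()
query = "when does the museum open late on friday".split()
print(score(query, *build_model(doc)))
```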
Abstract:
There is increased interest in measuring the amount of greenhouse gases produced by farming practices. This paper describes an integrated solar-powered unmanned aerial vehicle (UAV) and wireless sensor network (WSN) gas sensing system for greenhouse gas emissions in agricultural lands. The system combines a generic gas sensing payload that measures CH4 and CO2 concentrations using metal oxide (MOX) and non-dispersive infrared (NDIR) sensors, a new solar cell encapsulation method to power the unmanned aerial system (UAS), and a data management platform to store, analyze and share the information with operators and external users. The system was successfully field tested at ground level and low altitudes, collecting, storing and transmitting data in real time to a central node for analysis and 3D mapping. The system can be used in a wide range of outdoor applications at a relatively low operational cost. In particular, agricultural environments are increasingly subject to emissions mitigation policies. Accurate measurements of CH4 and CO2, including their temporal and spatial variability, can provide farm managers with key information to plan agricultural practices. A video of the bench and flight tests performed can be seen at the following link: https://www.youtube.com/watch?v=Bwas7stYIxQ
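The on-board software is not described in the abstract; purely as an illustration of the data flow (the sensor-reading functions, field names and values below are hypothetical), the sketch shows the shape of a sampling loop that reads the MOX and NDIR sensors, attaches a position and timestamp, and emits a record that the WSN link would forward to the central node.

```python
import json, time, random

def read_ch4_ppm():
    """Placeholder for the MOX methane sensor driver (hypothetical)."""
    return round(random.uniform(1.8, 2.5), 2)

def read_co2_ppm():
    """Placeholder for the NDIR CO2 sensor driver (hypothetical)."""
    return round(random.uniform(400, 450), 1)

def sample(gps_fix):
    """One telemetry record as it might be sent to the central node."""
    return {
        "timestamp": time.time(),
        "lat": gps_fix[0],
        "lon": gps_fix[1],
        "ch4_ppm": read_ch4_ppm(),
        "co2_ppm": read_co2_ppm(),
    }

if __name__ == "__main__":
    # In the real system the record would be transmitted over the WSN link;
    # here it is simply printed.
    for _ in range(3):
        print(json.dumps(sample(gps_fix=(-27.47, 153.03))))
        time.sleep(1)
```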
Abstract:
Surveying threatened and invasive species to obtain accurate population estimates is an important but challenging task that requires a considerable investment in time and resources. Estimates using existing ground-based monitoring techniques, such as camera traps and surveys performed on foot, are known to be resource intensive, potentially inaccurate and imprecise, and difficult to validate. Recent developments in unmanned aerial vehicles (UAVs), artificial intelligence and miniaturized thermal imaging systems represent a new opportunity for wildlife experts to inexpensively survey relatively large areas. The system presented in this paper includes thermal image acquisition as well as a video processing pipeline to perform object detection, classification and tracking of wildlife in forested or open areas. The system was tested on thermal video data from ground-based recordings and test flights, and was able to detect all the target wildlife located in the surveyed area. The system is flexible in that the user can readily define the types of objects to classify and the object characteristics that should be considered during classification.
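The processing pipeline is only outlined in the abstract; the sketch below shows one plausible, hypothetical implementation of the detection stage using OpenCV: threshold warm regions in an 8-bit thermal frame, extract contours, and keep blobs within a user-defined size range as candidate animals.

```python
import cv2
import numpy as np

def detect_warm_blobs(frame_gray, temp_threshold=200, min_area=50, max_area=5000):
    """Detect candidate animals in a single 8-bit thermal frame.
    Threshold and size limits are user-defined, echoing the system's configurable classes."""
    _, mask = cv2.threshold(frame_gray, temp_threshold, 255, cv2.THRESH_BINARY)
    # [-2] picks the contour list under both the OpenCV 3 and OpenCV 4 return conventions.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    boxes = []
    for c in contours:
        if min_area <= cv2.contourArea(c) <= max_area:
            boxes.append(cv2.boundingRect(c))   # (x, y, w, h) of a candidate blob
    return boxes

# Synthetic frame standing in for real footage: a dark background with one warm blob.
frame = np.zeros((240, 320), dtype=np.uint8)
cv2.circle(frame, (160, 120), 10, 255, -1)
print(detect_warm_blobs(frame))
```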