886 results for Localization real-world challenges
Abstract:
Topic detection and tracking (TDT) is an area of information retrieval research that focuses on news events. The problems TDT deals with relate to segmenting news text into cohesive stories, detecting something new and previously unreported, tracking the development of a previously reported event, and grouping together news stories that discuss the same event. The performance of traditional information retrieval techniques based on full-text similarity has remained inadequate for online production systems, and it has been difficult to make the distinction between same and similar events. In this work, we explore ways of representing and comparing news documents in order to detect new events and track their development. First, however, we put forward a conceptual analysis of the notions of topic and event. The purpose is to clarify the terminology and align it with the process of news-making and the tradition of story-telling. Second, we present a framework for document similarity that is based on semantic classes, i.e., groups of words with similar meaning. We adopt people, organizations, and locations as semantic classes in addition to general terms. As each semantic class can be assigned its own similarity measure, document similarity can make use of ontologies, e.g., geographical taxonomies. The documents are compared class-wise, and the outcome is a weighted combination of the class-wise similarities. Third, we incorporate temporal information into document similarity. We formalize the natural-language temporal expressions occurring in the text and use them to anchor the rest of the terms onto the time-line. When comparing documents for event-based similarity, we look not only at matching terms but also at how near their anchors are on the time-line. Fourth, we experiment with an adaptive variant of the semantic class similarity system. The news reflects changes in the real world, and in order to keep up, the system has to change its behavior based on the contents of the news stream. We put forward two strategies for rebuilding the topic representations and report experimental results. We run experiments with three annotated TDT corpora. The use of semantic classes increased the effectiveness of topic tracking by 10-30% depending on the experimental setup. The gain in spotting new events remained lower, around 3-4%. Anchoring the text to a time-line based on the temporal expressions gave a further 10% increase in the effectiveness of topic tracking. The gains in detecting new events, again, remained smaller. The adaptive systems did not improve the tracking results.
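The class-wise comparison can be pictured with a short sketch. The following Python fragment is only an illustration of the idea as stated in the abstract, not the thesis's implementation: cosine similarity stands in for the per-class measures (the thesis allows richer, ontology-based ones), and the class names and equal weights are assumptions made for the example.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical semantic classes and weights; the thesis tunes these per setup.
WEIGHTS = {"persons": 0.25, "organizations": 0.25, "locations": 0.25, "terms": 0.25}

def class_wise_similarity(doc1: dict, doc2: dict, weights=WEIGHTS) -> float:
    """Compare two documents class by class and return the weighted
    combination of the class-wise similarities."""
    return sum(w * cosine(doc1.get(c, Counter()), doc2.get(c, Counter()))
               for c, w in weights.items())

# Toy usage: the documents share a location but no general terms.
d1 = {"locations": Counter({"helsinki": 2}), "terms": Counter({"election": 3})}
d2 = {"locations": Counter({"helsinki": 1}), "terms": Counter({"vote": 2})}
print(class_wise_similarity(d1, d2))  # 0.25 * 1.0 for locations = 0.25
```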
Abstract:
Matrix decompositions, where a given matrix is represented as a product of two other matrices, are regularly used in data mining. Most matrix decompositions have their roots in linear algebra, but the needs of data mining are not always those of linear algebra. In data mining one needs results that are interpretable, and what is considered interpretable in data mining can be very different from what is considered interpretable in linear algebra. The purpose of this thesis is to study matrix decompositions that directly address the issue of interpretability. An example is a decomposition of binary matrices where the factor matrices are assumed to be binary and the matrix multiplication is Boolean. The restriction to binary factor matrices increases interpretability, since the factor matrices are of the same type as the original matrix, and allows the use of Boolean matrix multiplication, which is often more intuitive than normal matrix multiplication with binary matrices. Several other decomposition methods are also described, and the computational complexity of computing them is studied together with the hardness of approximating the related optimization problems. Based on these studies, algorithms for constructing the decompositions are proposed. Constructing the decompositions turns out to be computationally hard, and the proposed algorithms are mostly based on various heuristics. Nevertheless, the algorithms are shown to be capable of finding good results in empirical experiments conducted with both synthetic and real-world data.
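As a concrete illustration of the Boolean product mentioned above (a minimal sketch, not the thesis's algorithms), here is a small binary matrix expressed exactly as the Boolean product of two binary factors; the matrices are invented for the example.

```python
import numpy as np

def boolean_matmul(B: np.ndarray, C: np.ndarray) -> np.ndarray:
    """Boolean matrix product: entry (i, j) is the OR over k of
    B[i, k] AND C[k, j]. Unlike the product over the integers, 1 + 1 = 1."""
    return ((B.astype(int) @ C.astype(int)) > 0).astype(int)

# A 3x4 binary matrix with an exact rank-2 Boolean decomposition:
B = np.array([[1, 0],
              [1, 1],
              [0, 1]])
C = np.array([[1, 1, 0, 0],
              [0, 1, 1, 1]])
A = boolean_matmul(B, C)
print(A)  # each row of A is an OR-combination of rows of C
```

The interpretability claim is visible here: the rows of B record which "patterns" (rows of C) each row of A uses, and all three matrices stay binary, the same type as the data.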
Abstract:
Ubiquitous computing is about making computers and computerized artefacts a pervasive part of our everyday lives, bringing more and more activities into the realm of information. This computationalization and informationalization of everyday activities increases not only our reach, efficiency and capabilities but also the amount and kinds of data gathered about us and our activities. In this thesis, I explore how information systems can be constructed so that they handle this personal data in a reasonable manner. The thesis provides two kinds of results: on the one hand, tools and methods for both the construction and the evaluation of ubiquitous and mobile systems; on the other hand, an evaluation of the privacy aspects of a ubiquitous social awareness system. The work emphasises real-world experiments as the most important way to study privacy. Additionally, the state of current information systems as regards data protection is studied. The tools and methods in this thesis consist of three distinct contributions. An algorithm for locationing in cellular networks is proposed that does not require the location information to be revealed beyond the user's terminal. A prototyping platform for the creation of context-aware ubiquitous applications, called ContextPhone, is described and released as open source. Finally, a set of methodological findings for the use of smartphones in social scientific field research is reported. A central contribution of this thesis is the set of pragmatic tools that allow other researchers to carry out experiments. The evaluation of the ubiquitous social awareness application ContextContacts covers both the usage of the system in general and an analysis of its privacy implications. Based on several long-term field studies, the usage of the system is analyzed in the light of how users make inferences about others from the real-time contextual cues mediated by the system. The analysis of privacy implications draws together the social psychological theory of self-presentation and research on privacy for ubiquitous computing, deriving a set of design guidelines for such systems. The main findings from these studies can be summarized as follows. The fact that ubiquitous computing systems gather more data about users can be used not only to study the use of such systems in an effort to create better systems, but also to study previously unstudied phenomena, such as the dynamic change of social networks. Systems that let people create new ways of presenting themselves to others can be fun for the users, but self-presentation requires several thoughtful design decisions that allow the manipulation of the image mediated by the system. Finally, the growing amount of computational resources available to users can be used to let them work with the data themselves, rather than being passive subjects of data gathering.
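The abstract does not spell the locationing algorithm out; purely as a hedged illustration of the stated privacy property (position is computed on the terminal, so it never leaves it), the sketch below matches observed cells against a database stored locally on the device. All identifiers, coordinates and the weighted-centroid rule are invented for the example.

```python
# Hypothetical terminal-side positioning: the device resolves its own
# location from locally stored cell data, so nothing about its
# whereabouts is revealed to the network or to any server.
LOCAL_CELL_DB = {  # cell_id -> known tower coordinates, stored on the device
    101: (60.17, 24.94),
    102: (60.19, 24.96),
    103: (60.16, 24.90),
}

def estimate_position(observed: dict[int, float]) -> tuple[float, float]:
    """Estimate position as a signal-strength-weighted centroid of the
    towers the terminal currently hears (observed: cell_id -> weight)."""
    known = {cid: w for cid, w in observed.items() if cid in LOCAL_CELL_DB}
    if not known:
        raise ValueError("no known cells observed")
    total = sum(known.values())
    lat = sum(LOCAL_CELL_DB[cid][0] * w for cid, w in known.items()) / total
    lon = sum(LOCAL_CELL_DB[cid][1] * w for cid, w in known.items()) / total
    return lat, lon

print(estimate_position({101: 0.5, 102: 0.3, 103: 0.2}))
```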
Abstract:
In visual object detection and recognition, classifiers have two interesting characteristics: accuracy and speed. Accuracy depends on the complexity of the image features and classifier decision surfaces. Speed depends on the hardware and the computational effort required to use the features and decision surfaces. When attempts to increase accuracy lead to increases in complexity and effort, it is necessary to ask how much we are willing to pay for increased accuracy. For example, if increased computational effort implies quickly diminishing returns in accuracy, then those designing inexpensive surveillance applications cannot aim for maximum accuracy at any cost. It becomes necessary to find trade-offs between accuracy and effort. We study efficient classification of images depicting real-world objects and scenes. Classification is efficient when a classifier can be controlled so that the desired trade-off between accuracy and effort (speed) is achieved and unnecessary computations are avoided on a per-input basis. A framework is proposed for understanding and modeling efficient classification of images, in which classification is modeled as a tree-like process. In designing the framework, it is important to recognize what is essential and to avoid structures that are narrow in applicability; earlier frameworks are lacking in this regard. The overall contribution is two-fold. First, the framework is presented, subjected to experiments, and shown to be satisfactory. Second, certain unconventional approaches are experimented with, which allows the separation of the essential from the conventional. To determine whether the framework is satisfactory, three categories of questions are identified: trade-off optimization, classifier tree organization, and rules for delegation and confidence modeling. Questions and problems related to each category are addressed and empirical results are presented. For example, related to trade-off optimization, we address the problem of computational bottlenecks that limit the range of trade-offs, and we ask whether accuracy-versus-effort trade-offs can be controlled after training. Regarding classifier tree organization, we first consider the task of organizing a tree in a problem-specific manner and then ask whether problem-specific organization is necessary.
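As a hedged illustration of the tree-like, per-input process described above (a generic cascade sketch, not the thesis's framework), the fragment below runs classifiers from cheap to costly and stops as soon as one is confident; the confidence threshold is the knob that trades accuracy against effort after training. All names and numbers are invented.

```python
def cascade_predict(x, stages, threshold=0.8):
    """Per-input early-exit classification.

    stages: list of (classifier, cost) pairs, cheapest first; each
    classifier returns (label, confidence). The input is delegated to
    the next, costlier stage only while confidence stays below the
    threshold, so raising the threshold buys accuracy with more effort.
    """
    spent, label = 0.0, None
    for clf, cost in stages:
        spent += cost
        label, confidence = clf(x)
        if confidence >= threshold:
            break  # confident enough: skip the remaining computation
    return label, spent

# Toy usage: a cheap stage delegating an uncertain input to a costlier one.
cheap = lambda x: (x > 0, 0.9 if abs(x) > 1 else 0.5)
costly = lambda x: (x > 0, 0.99)
print(cascade_predict(0.3, [(cheap, 1.0), (costly, 10.0)]))  # delegates
```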
Abstract:
Major infrastructure and construction (MIC) projects are those with significant traffic or environmental impact, strategic and regional significance, and high sensitivity. The decision-making process for schemes of this type is becoming ever more complicated, especially with the increasing number of stakeholders involved and their growing tendency to defend their own varied interests. Failing to address and meet the concerns and expectations of stakeholders may result in project failure, so a systematic participatory approach is needed to facilitate decision-making. Though numerous decision models have been established in previous studies (e.g. the ELECTRE methods, the analytic hierarchy process and the analytic network process), their applicability in the decision process during stakeholder participation in contemporary MIC projects is still uncertain. To resolve this, the decision rule approach is employed for modeling multi-stakeholder, multi-objective project decisions. Through this approach, the result is obtained naturally according to the “rules” accepted by every stakeholder involved. In this sense, consensus is more likely to be achieved, since the process is more convincing and the result is easier for all concerned to accept. Appropriate “rules”, comprehensive enough to address multiple objectives while straightforward enough to be understood by multiple stakeholders, are set for resolving conflict and facilitating consensus during the project decision process. The West Kowloon Cultural District (WKCD) project is used as a demonstration case, and a focus group meeting is conducted to confirm the validity of the model established. The results indicate that the model is objective, reliable and practical enough to cope with real-world problems. Finally, a suggested future research agenda is provided.
Abstract:
Mobile RFID services for the Internet of Things can be created by using RFID as an enabling technology in mobile devices. Humans, devices, and things are both the content providers and the users of these services. Mobile RFID services can either be provided on mobile devices as stand-alone services or be combined with end-to-end systems. When different service solution scenarios are considered, there is more than one possible architectural solution in the network, mobile, and back-end server areas. By combining the solutions wisely, applying software architecture and engineering principles, a combined solution can be formulated for specific application use cases. This thesis illustrates these ideas and shows how the solutions can generally be applied in real-world use case scenarios. A case study is used to add further evidence.
Abstract:
Improving the availability, accessibility and affordability of healthy food equitably is fundamental to improving nutrition and health. While theoretical models abound, in real-world complex systems there are rarely opportunities to address leverage points systematically to improve the food supply. This presentation describes efforts over the last 30 years to do just that by remote Australian Aboriginal communities, where a single community store is usually the major dietary source. Areas addressed include store governance and infrastructure, wholesale supply, transport, and pricing policies including cross-subsidization. However, while there have been dramatic improvements in the availability, quality and price of fruit, vegetables and most other healthy foods over this time, the proportion of communities' energy intake derived from energy-dense, nutrient-poor foods and drinks has increased. One cause may be the disproportionate increase in the supply of unhealthy choices in terms of variety and shelf-space, consistent with changes in the food supply in broader Australia. The impact of changing social and environmental factors, food preferences and price elasticity is also explored briefly. Clearly much more needs to be done to reduce the high prevalence of diet-related chronic disease in some vulnerable groups. In particular, efforts to continually improve the availability and affordability of healthy food also need to address the predominance of unhealthy choices in the food supply.
Abstract:
Ongoing habitat loss and fragmentation threaten much of the biodiversity that we know today, so conservation efforts are required if we want to protect biodiversity. Conservation budgets are typically tight, making the cost-effective selection of protected areas difficult. Therefore, reserve design methods have been developed to identify sets of sites that together represent the species of conservation interest in a cost-effective manner. To be able to select reserve networks, data on species distributions are needed. Such data are often incomplete, but species habitat distribution models (SHDMs) can be used to link the occurrence of a species at the surveyed sites to the environmental conditions at those locations (e.g. climatic, vegetation and soil conditions). The model then predicts the probability of the species occurring at unvisited locations, based on the environmental conditions of those sites. The spatial configuration of reserve networks is important, because habitat loss around reserves can influence the persistence of species inside the network. Since species differ in their requirements for network configuration, the spatial cohesion of networks needs to be species-specific. A way to account for species-specific requirements is to use spatial variables in SHDMs. Spatial SHDMs allow the evaluation of the effect of reserve network configuration on the probability of occurrence of the species inside the network. Even though reserves are important for conservation, they are not the only option available to conservation planners. To enhance or maintain habitat quality, restoration or maintenance measures are sometimes required, and as a result the number of conservation options per site increases. Currently available reserve selection tools do not, however, offer the ability to handle multiple, alternative options per site. This thesis extends the existing methodology for reserve design by offering methods to identify cost-effective conservation planning solutions when multiple, alternative conservation options are available per site. Although restoration and maintenance measures are beneficial to certain species, they can be harmful to other species with different requirements. This introduces trade-offs between species when identifying which conservation action is best applied to which site. The thesis describes how the strength of such trade-offs can be identified, which is useful for assessing the consequences of conservation decisions regarding species priorities and budget. Furthermore, the results of the thesis indicate that spatial SHDMs can be successfully used to account for species-specific requirements for spatial cohesion, in the reserve selection (single-option) context as well as in the multi-option context. Accounting for the spatial requirements of multiple species while allowing for several conservation options is, however, complicated, due to trade-offs in species requirements. It is also shown that spatial SHDMs can be successfully used for gaining information on the factors that drive a species' spatial distribution. Such information is valuable to conservation planning, as better knowledge of species requirements facilitates the design of networks for species persistence. The methods and results described in this thesis aim to improve species' probabilities of persistence by taking better account of species' habitat and spatial requirements.
Many real-world conservation planning problems are characterised by a variety of conservation options related to protection, restoration and maintenance of habitat. Planning tools therefore need to be able to incorporate multiple conservation options per site, in order to continue the search for cost-effective conservation planning solutions. Simultaneously, the spatial requirements of species need to be considered. The methods described in this thesis offer a starting point for combining these two relevant aspects of conservation planning.
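As a minimal sketch of the multi-option setting described above (not the thesis's method, which additionally handles spatial cohesion via spatial SHDMs), the fragment below greedily picks at most one conservation option per site by marginal benefit per unit cost; the sites, options, costs and benefits are invented.

```python
def greedy_multi_option(sites, budget):
    """Greedy cost-effectiveness selection when each site offers several
    mutually exclusive conservation options (protect/restore/maintain).

    sites: {site: {option: (cost, benefit)}}. Each step picks the
    affordable option with the best benefit-per-cost ratio; once a site
    receives an option, its alternatives are dropped.
    """
    chosen, remaining = {}, budget
    candidates = {(s, o): cb for s, opts in sites.items() for o, cb in opts.items()}
    while candidates:
        (site, opt), (cost, benefit) = max(
            candidates.items(), key=lambda kv: kv[1][1] / kv[1][0])
        if cost <= remaining:
            chosen[site] = opt
            remaining -= cost
            candidates = {k: v for k, v in candidates.items() if k[0] != site}
        else:
            del candidates[(site, opt)]
    return chosen

sites = {"A": {"protect": (4, 6), "restore": (9, 10)},
         "B": {"maintain": (2, 3)}}
print(greedy_multi_option(sites, budget=8))  # {'A': 'protect', 'B': 'maintain'}
```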
Abstract:
Design-based research (DBR) is an appropriate method for small-scale educational research projects involving collaboration between teachers, students and researchers. It is particularly useful in collaborative projects where an intervention is implemented and evaluated in a grounded context. The intervention can be technological, or a new program required by policy changes. It can be applied in educational contexts, such as when English teachers undertake higher degree research projects in their own or others' sites, or when academics work collaboratively as researchers with teams of teachers. In the case described here, the paper shows that DBR is designed to make a difference in the real-world contexts in which it occurs.
Abstract:
It is observed in the real world that taxes matter for location decisions and that multinationals shift profits by transfer pricing. The US and Canada use so-called formula apportionment (FA) to tax corporate income, and the EU is debating a switch from separate accounting (SA) to FA. This paper develops a theoretical model that compares basic properties of FA to SA. The focal point of the analysis is how changes in tax rates affect capital formation, input choice, and transfer pricing, as well as spillovers on tax revenue in other countries. The analysis shows that a move from SA to FA will not eliminate such spillovers and will, in cases identified in the paper, actually aggravate them.
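The abstract does not reproduce the model, but the contrast it studies can be indicated in textbook notation (an illustration, not the paper's exact specification). Under SA, country i taxes the profit booked locally, which transfer prices can shift between jurisdictions; under one-factor FA, the consolidated profit is apportioned by, say, capital shares, so tax-rate changes still spill over through where capital is located:

```latex
% Illustrative one-factor apportionment rule; not the paper's model.
T_i^{SA} = t_i \, \pi_i
\qquad \text{vs.} \qquad
T_i^{FA} = t_i \, \frac{K_i}{\sum_j K_j} \, \Pi ,
\qquad \Pi = \sum_j \pi_j
```

Here t_i is country i's tax rate, \pi_i the profit booked there, and K_i the capital employed there.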
Abstract:
Multi-document summarization, which addresses the problem of information overload, has been widely utilized in various real-world applications. Most existing approaches adopt a term-based representation for documents, which limits the performance of multi-document summarization systems. In this paper, we propose a novel pattern-based topic model (PBTMSum) for the task of multi-document summarization. PBTMSum, which combines pattern mining techniques with LDA topic modelling, can generate discriminative and semantically rich representations for topics and documents, so that the most representative and non-redundant sentences can be selected to form a succinct and informative summary. Extensive experiments are conducted on the Document Understanding Conference (DUC) 2007 data. The results demonstrate the effectiveness and efficiency of our proposed approach.
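The sentence-selection step the abstract describes can be sketched generically (an MMR-style stand-in, not PBTMSum itself): score each sentence by its coverage of the topic representation, penalize overlap with what the summary already covers, and pick greedily. The topic terms and sentences below are invented.

```python
def select_summary(sentences, topic_terms, k=3, penalty=0.5):
    """Greedy MMR-style selection of representative, non-redundant
    sentences: reward coverage of the topic terms, penalize overlap
    with what the summary already covers."""
    pool = [(s, set(s.lower().replace(".", "").split())) for s in sentences]
    summary, covered = [], set()
    while pool and len(summary) < k:
        best = max(pool, key=lambda p: len(p[1] & topic_terms)
                                       - penalty * len(p[1] & covered))
        summary.append(best[0])
        covered |= best[1]
        pool.remove(best)
    return summary

topics = {"pattern", "topic", "document", "summary"}
print(select_summary(
    ["Pattern mining enriches the topic representation of each document",
     "A topic model describes each document as a mixture of topics",
     "An unrelated sentence about the weather"],
    topics, k=2))
```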
Abstract:
This book investigates the ethical values that inform the global carbon integrity system and reflects on alternative norms that could or should do so. The global carbon integrity system comprises the emerging international architecture being built to respond to climate change. This architecture can be understood as an 'integrity system': an inter-related set of institutions, governance arrangements, regulations and practices that work to ensure the system performs its role faithfully and effectively. This volume investigates the ways ethical values impact on where and how the integrity system works, where it fails, and how it can be improved. With a wide array of perspectives across many disciplines, including ethicists, philosophers, lawyers, governance experts and political theorists, the chapters explore the positive values driving the global climate change processes, offering an understanding of the motivations justifying the creation of the regime and of the way social norms impact upon the operation of the integrity system. The collection focuses on the nexus between ideal ethics and real-world implementation through institutions and laws. The book will be of interest to policy makers, climate change experts, carbon taxation regulators, academics, legal practitioners and researchers.
Abstract:
Undergraduate Medical Imaging (MI) students at QUT attend their first clinical placement towards the end of semester two. Students undertake two (pre)clinical skills development units – one theory and one practical. Students gain good contextual and theoretical knowledge during these units via a blended learning model employing multiple learning methods: theory lectures, practical sessions, tutorial sessions in both simulated and virtual environments, and pre-clinical scenario-based tutorial sessions. The aim of this project is to evaluate the use of blended learning in the context of first-year Medical Imaging Radiographic Technique and its effectiveness in preparing students for their first clinical experience. It is hoped that the multiple teaching methods employed within the pre-clinical training unit at QUT build students' clinical skills prior to the real situation. A quantitative approach is taken, evaluating via pre- and post-clinical-placement surveys; this data is correlated with data gained in the previous year on the effectiveness of this training approach prior to clinical placement. In 2014, 59 students surveyed prior to their clinical placement reported positive benefits of using a variety of learning tools to enhance their learning. 98.31% (n=58) of students agreed or strongly agreed that the theory lectures were a useful tool to enhance their learning. This was followed closely by 97% (n=57) of students recognising the value of performing role-play simulation prior to clinical placement. Tutorial engagement was considered useful by 93.22% (n=55), whilst 88.14% (n=52) found the x-raying of phantoms in the simulated radiographic laboratory beneficial. Self-directed learning yielded 86.44% (n=51), and the virtual reality simulation software was valuable for 72.41% (n=42) of the students. Each of the 4 students who disagreed or strongly disagreed with the usefulness of any one tool strongly agreed with the usefulness of at least one other learning tool. The impact of the blended learning model in meeting diverse student needs continues to be positive, with students engaging in most offerings. Students largely prefer the pre-clinical scenario-based practical and tutorial sessions where 'real-world' situations are discussed.
Abstract:
From Kurt Vonnegut to Stephen King, many novelists use metanarrative techniques to insert fictional versions of themselves into the stories they tell. The function of deploying such techniques is often to draw attention to the liminal space between the fictional constructs inherent in the novel as a form and the real world from which the constructs draw inspiration, and in which, indeed, they are read by an audience. For emerging writers working in short-form narratives, however, the structural demands of the short story or flash fiction limit the depth to which similar techniques can be deployed. ‘Oh Holly, the fish is dead’ is the fourth in a series of short stories that work to overcome the structural limitations of a succinct form by developing a fractured fictional version of the author over a number of pieces published across a range of sites. The cumulative effect is a richer metanarrative textual arrangement that still allows the individual short stories to function independently.
Abstract:
From Kurt Vonnegut to Stephen King, many novelists use metanarrative techniques to insert fictional versions of themselves into the stories they tell. The function of deploying such techniques is often to draw attention to the liminal space between the fictional constructs inherent in the novel as a form and the real world from which the constructs draw inspiration, and in which, indeed, they are read by an audience. For emerging writers working in short-form narratives, however, the structural demands of the short story or flash fiction limit the depth to which similar techniques can be deployed. ‘Swing Low’ is the fifth in a series of short stories that work to overcome the structural limitations of a succinct form by developing a fractured fictional version of the author over a number of pieces published across a range of sites. The cumulative effect is a richer metanarrative textual arrangement that still allows the individual short stories to function independently.