122 results for Anchoring heuristic


Relevance:

10.00%

Publisher:

Abstract:

Speaker diarization is the process of annotating an input audio signal with information that attributes temporal regions of the signal to their respective sources, which may include both speech and non-speech events. For speech regions, the diarization system also specifies the locations of speaker boundaries and assigns relative speaker labels to each homogeneous segment of speech. In short, speaker diarization systems effectively answer the question of ‘who spoke when’. There are several important applications for speaker diarization technology, such as facilitating speaker indexing systems to allow users to directly access the relevant segments of interest within a given audio file, and assisting with other downstream processes such as summarizing and parsing. When combined with automatic speech recognition (ASR) systems, the metadata extracted by a speaker diarization system can provide complementary information for ASR transcripts, including the location of speaker turns and relative speaker segment labels, making the transcripts more readable. Speaker diarization output can also be used to localize the instances of specific speakers to pool data for model adaptation, which in turn boosts transcription accuracy. Speaker diarization therefore plays an important role as a preliminary step in the automatic transcription of audio data.

The aim of this work is to improve the usefulness and practicality of speaker diarization technology through the reduction of diarization error rates. In particular, this research focuses on the segmentation and clustering stages within a diarization system. Although particular emphasis is placed on the broadcast news audio domain, and the systems developed throughout this work are trained and tested on broadcast news data, the techniques proposed in this dissertation are also applicable to other domains, including telephone conversations and meeting audio. Three main research themes were pursued: heuristic rules for speaker segmentation, modelling uncertainty in speaker model estimates, and modelling uncertainty in eigenvoice speaker modelling.

The use of heuristic approaches for the speaker segmentation task was investigated first, with emphasis placed on minimizing missed boundary detections. A set of heuristic rules was proposed to govern the detection and heuristic selection of candidate speaker segment boundaries. A second pass, using the same heuristic algorithm with a smaller window, was also proposed with the aim of improving detection of boundaries around short speaker segments. Compared to single-threshold-based methods, the proposed heuristic approach was shown to provide improved segmentation performance, leading to a reduction in the overall diarization error rate.

Methods to model the uncertainty in speaker model estimates were developed to address the difficulties associated with making segmentation and clustering decisions with limited data in the speaker segments. The Bayes factor, derived specifically for multivariate Gaussian speaker modelling, was introduced to account for the uncertainty of the speaker model estimates. The use of the Bayes factor also enabled the incorporation of prior information regarding the audio to aid segmentation and clustering decisions. The idea of modelling uncertainty in speaker model estimates was also extended to the eigenvoice speaker modelling framework for the speaker clustering task. Building on the application of Bayesian approaches to the speaker diarization problem, the proposed approach takes into account the uncertainty associated with the explicit estimation of the speaker factors. The proposed decision criteria, based on Bayesian theory, were shown to generally outperform their non-Bayesian counterparts.
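
To make the Gaussian model-selection decision behind such segmentation criteria concrete, the sketch below shows the classic delta-BIC test: two adjacent segments of feature frames (e.g. MFCCs, as numpy arrays with one frame per row) warrant a speaker boundary when the gain from modelling them as two multivariate Gaussians outweighs a parameter penalty. This is a standard baseline rather than the Bayes factor derived in the thesis, and the penalty weight lam is an illustrative tuning parameter.

    import numpy as np

    def log_det_cov(x):
        """Log-determinant of the ML covariance estimate of frames x."""
        sign, logdet = np.linalg.slogdet(np.cov(x, rowvar=False, bias=True))
        return logdet

    def delta_bic(seg1, seg2, lam=1.0):
        """Positive value suggests a speaker change between seg1 and seg2."""
        n1, n2 = len(seg1), len(seg2)
        n, d = n1 + n2, seg1.shape[1]
        merged = np.vstack([seg1, seg2])
        # Log-likelihood gain from modelling the segments separately...
        gain = 0.5 * (n * log_det_cov(merged)
                      - n1 * log_det_cov(seg1)
                      - n2 * log_det_cov(seg2))
        # ...minus a BIC penalty for the second Gaussian's extra parameters.
        penalty = 0.5 * lam * (d + 0.5 * d * (d + 1)) * np.log(n)
        return gain - penalty

A sliding window would evaluate this score at each candidate boundary; the two-pass heuristic described above would then re-examine regions around short segments with a smaller window.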

Relevance:

10.00%

Publisher:

Abstract:

Predicate encryption (PE) is a new primitive which supports flexible control over access to encrypted data. In PE schemes, users' decryption keys are associated with predicates f and ciphertexts encode attributes a that are specified during the encryption procedure. A user can successfully decrypt if and only if f(a) = 1. In this thesis, we investigate several properties that are crucial to PE. We focus on the expressiveness of PE, Revocable PE, and Hierarchical PE (HPE) with forward security. For all proposed systems, we provide a security model and analysis using the widely accepted computational complexity approach.

Our first contribution is to explore the expressiveness of PE. Existing PE supports a wide class of predicates, such as conjunctions of equality, comparison and subset queries, disjunctions of equality queries, and, more generally, arbitrary combinations of conjunctive and disjunctive equality queries. We advance PE to evaluate more expressive predicates, e.g., disjunctive comparison or disjunctive subset queries. Such expressiveness is achieved at the cost of computational and space overhead. To improve the performance, we appropriately revise the PE to reduce the computational and space cost. Furthermore, we propose a heuristic method to reduce disjunctions in the predicates. Our schemes are proven secure in the standard model.

We then introduce the concept of Revocable Predicate Encryption (RPE), which extends the previous PE setting with revocation support: private keys can be used to decrypt an RPE ciphertext only if they match the decryption policy (defined via attributes encoded into the ciphertext and predicates associated with private keys) and were not revoked by the time the ciphertext was created. We propose two RPE schemes. Our first scheme, termed Attribute-Hiding RPE (AH-RPE), offers attribute-hiding, which is the standard PE property. Our second scheme, termed Full-Hiding RPE (FH-RPE), offers even stronger privacy guarantees: apart from possessing the attribute-hiding property, the scheme also ensures that no information about revoked users is leaked from a given ciphertext. The proposed schemes are also proven secure under well-established assumptions in the standard model.

Secrecy of decryption keys is an important prerequisite for the security of (H)PE, and compromised private keys must be immediately replaced. The notion of Forward Security (FS) reduces damage from compromised keys by guaranteeing confidentiality of messages that were encrypted prior to the compromise event. We present the first Forward-Secure Hierarchical Predicate Encryption (FS-HPE) that is proven secure in the standard model. Our FS-HPE scheme offers several desirable properties: time-independent delegation of predicates (to support dynamic behavior for delegation of decryption rights to new users), local update of users' private keys (i.e., no master authority needs to be contacted), forward security, and an encryption process that does not require knowledge of predicates at any level, including when those predicates join the hierarchy.
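
The cryptographic constructions are beyond the scope of an abstract, but the access condition f(a) = 1 itself is easy to illustrate. The sketch below evaluates conjunctive and disjunctive equality, comparison and subset predicates over an attribute record in the clear; it shows only the decryption semantics, not any encryption, and all names and data are illustrative.

    # Plain-Python illustration of the PE access condition f(a) = 1: a key
    # encoding predicate f decrypts a ciphertext carrying attributes a only
    # when f holds. In a real PE scheme this evaluation happens under
    # encryption; here only the predicate semantics are shown.
    def conj(*preds):
        return lambda a: all(p(a) for p in preds)

    def disj(*preds):
        return lambda a: any(p(a) for p in preds)

    def eq(field, value):                 # equality query
        return lambda a: a.get(field) == value

    def geq(field, value):                # comparison query
        return lambda a: a.get(field, float("-inf")) >= value

    def subset(field, allowed):           # subset query
        return lambda a: a.get(field) in allowed

    # A disjunctive comparison/subset predicate, the kind of expressiveness
    # targeted by the first contribution above:
    f = disj(geq("clearance", 3), subset("dept", {"audit", "legal"}))
    print(f({"clearance": 1, "dept": "legal"}))   # True: decryption succeeds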

Relevance:

10.00%

Publisher:

Abstract:

This research is an autoethnographic investigation of consumption experiences, public and quasi-public spaces, and their relationship to community within an inner city neighbourhood. The research specifically focuses on the gentrifying inner city, where class-based processes of change can have implications for people’s abilities to remain within, or feel connected to, place. However, the thesis draws on broader theories of the throwntogetherness of the contemporary city (e.g., Amin and Thrift, 2002; Massey, 2005) to argue that the city is a space where place-based meanings cannot be seen to be fixed, and are instead better understood as events of place, based on ever-shifting interrelations between the trajectories of people and things. This perspective argues that the experience of belonging to community is not just born of a social encounter, but also draws on the physical and symbolic elements of the context in which it is situated. The thesis particularly explores the ways people construct identifications within this shifting urban environment. As such, consumption practices and spaces offer one important lens through which to explore the interplay of the physical, social and symbolic.

Consumer research tells us that consumption practices can facilitate experiences in which identity-defining meaning can be generated and shared. Consumption spaces can also support different kinds of collective identification: as anchoring realms for specific cultural groups, or as exposure realms that enable individuals to share in the identification practices of others with limited risk (Aubert-Gamet & Cova, 1999). Furthermore, the consumption-based lifestyles that gentrifying inner city neighbourhoods both support and encourage can also mean that consumption practices may be a key reason that people are moving through public space. That is, consumption practices and spaces may provide a purpose for which, and a spatial frame against which, our everyday interactions and connections with people and objects are undertaken within such neighbourhoods.

The purpose of this investigation, then, was to delve into the subjectivities at the heart of identifying with places, using the lens of our consumption-based experiences within them. The enquiry describes individual and collective identifications and emotional connections, and explores how these arise within and through our experiences within public and quasi-public spaces. It then theorises these ‘imaginings’ as representative of an experience of community. To do so, it draws on theories of imagination and its relation to community. Theories of imagined community remind us that both the values and identities of community are held together by projections that create relational links out of objects and shared practices (e.g., Anderson, 2006; Urry, 2000). Drawing on broader theories of the processes of the imagination, this thesis suggests that an interplay between reflexivity and fantasy, which are products of the critical and the fascinated consciousness, plays a role in this imagining of community (e.g., Brann, 1991; Ricoeur, 1994). This thesis therefore seeks to explore how these processes of imagining are implicated within the construction of an experience of belonging to neighbourhood-based community through consumption practices and the public and quasi-public spaces that frame them. The key question of this thesis is: how do an individual’s consumption practices work to construct an imagined presence of neighbourhood-based community?
Given the focus on public and quasi-public spaces and our experiences within them, the research also asked: how do experiences in the public and quasi-public spaces that frame these practices contribute to the construction of this imagined presence? This investigation of imagining community through consumption practices is based on my own experiences of moving to, and attempting to construct community connections within, an inner city neighbourhood in Melbourne, Australia. To do so, I adopted an autoethnographic methodology, because autoethnography provides the methodological tools through which one can explore and make visible the subjectivities inherent within the lived experiences of interest to the thesis (Ellis, 2004).

I describe imagining community through consumption as an extension of a place-based self. This self is manifest through personal identification in consumption spaces that operate as anchoring realms for specific cultural groups, as well as through a broader imagining of spaces, people, and practices as connected through experiences within realms of exposure. However, this is a process that oscillates through cycles of identification; these anchor one within place personally, but also disrupt those attachments. This instability can force one to question the orientation and motives of these imaginings, and to reframe them according to different spaces and reference groups in ways that can also work to construct a more anonymous and, conversely, more achievable collective identification. All the while, the ‘I’ at the heart of this identification is in an ongoing process of negotiation, and similarly, the imagined community is never complete. That is, imagining community is a negotiation, with people and spaces, but mostly with the different identifications of the self.

This thesis has been undertaken by publication, and thus the process of imagining community is explored and described through four papers. Of these, the first two focus on specific types of consumption spaces, a bar and a shopping centre, and consider the ways that anchoring and exposure within these spaces support the process of imagining community. The third paper examines the ways that the public and quasi-public spaces that make up the broader neighbourhood context are themselves throwntogether as a realm of exposure, and considers the ways this shapes my imaginings of this neighbourhood as community. The final paper develops a theory of imagined community, as a process of comparison and contrast with imagined others, to provide a summative conceptualisation of the first three papers.

The first paper, chapter five, explores this process of comparison and contrast in relation to authenticity, which is in itself a subjective assessment of identity. This chapter was written as a direct response to the recent work of Zukin (2010), and draws on theories of authenticity as applied to personal and collective identification practices by the consumer researchers Arnould and Price (2000). In this chapter, I describe how my assessments of the authenticity of my anchoring experiences within one specific consumption space, a neighbourhood bar, are evaluated in comparison to my observations of, and affective reactions to, the social practices of another group of residents in a different consumption space, the local shopping centre. Chapter five also provides an overview of the key sites and experiences that are considered in more detail in the following two chapters.
In chapter six, I again draw on my experiences within the bar introduced in chapter five, this time to explore the process of developing a regular identity within a specific consumption space. Addressing the popular theory of the cafe or bar as third place (Oldenburg, 1999), this paper considers the purpose of developing anchored relationships with people within specific consumption spaces, and explores the different ways this may be achieved in an urban context where the mobilities and lifestyle practices of residents complicate the idea of a consumption space as an anchoring or third place. In doing so, this chapter also considers the manner in which this type of regular identification may be seen as the beginning of the process of imagining community.

In chapter seven, I consider the ways the broader public spaces of the neighbourhood work cumulatively to expose different aspects of its identity, by following my everyday movements through the neighbourhood’s shopping centre and main street. Drawing on the theories of Urry (2000), Massey (2005), and Amin (2007, 2008), this chapter describes how these spaces operate as exposure realms, enabling the expression of different senses of the neighbourhood’s spaces, times, cultures, and identities through their physical, social, and symbolic elements. Yet they also enable these to be united: through habitual pathways, group practices of appropriation of space, and memory traces that construct connections between objects and experiences. This chapter describes this as a process of exposure to these different elements: our imagination begins to expand the scope of the frames onto which it projects an imagined presence; it searches for patterns within the physical, social, and symbolic environment and draws connections between people and practices across spaces.

As the final paper, chapter eight, deduces, it is in making these connections that one constructs the objects and shared practices of imagined community. This chapter describes this as an imagining of neighbourhood as a place-based extension of the self, and then explores the ways in which I drew on physical, social, and symbolic elements in an attempt to construct a fit between the neighbourhood’s offerings and my desires for place-based identity definition. This was a cumulative but fragmented process, in which positive and negative experiences of interaction and identification with people and things were searched for their potential to operate as the objects and shared practices of imagined community. This chapter describes these connections as constructed through an interplay between reflexivity and fantasy, as the imagination seeks balance between desires for experiences of belonging and the complexities of constructing them within the throwntogether context of the contemporary city.

The conclusion of the thesis describes the process of imagining community as a reflexive fantasy, that is, as a product of both the critical and the fascinated consciousness (Ricoeur, 1994). It suggests that the fascinated consciousness imbues experiences with hope and desire, which reflexive imagining can turn to disappointment and shame as it critically reflects on the reality of those fascinated projections. At the same time, the reflexive imagination also searches the practices of others for affirmation of those projections, effectively seeking to prove the reality of the fantasy of the imagined community.

Relevance:

10.00%

Publisher:

Abstract:

This research introduces the proposition that Electronic Dance Music’s beat-mixing function could be implemented to create immediacy in other musical genres. The inclusion of rhythmic sections at the beginning and end of each musical work created a ‘DJ friendly’ environment. The term used in this thesis to refer to the application of beat-mixing in Rock music is ‘ClubRock’. Through collaboration between a number of DJs and Rock music professionals, the process of beat-mixing was applied to blend Rock tracks into a continuous ClubRock set. The DJ technique of beat-mixing Rock music transformed static renditions into a fluid creative work. The hybridisation of the two genres, EDM and Rock, resulted in a contribution to Rock music compositional approaches and the production of a unique Rock album, Manarays—Get Lucky.

Relevance:

10.00%

Publisher:

Abstract:

Classifier selection is a problem encountered by multi-biometric systems that aim to improve performance through fusion of decisions. A particular decision fusion architecture that combines multiple instances (n classifiers) and multiple samples (m attempts at each classifier) has been proposed in previous work to achieve a controlled trade-off between false alarms and false rejects. Although analysis on text-dependent speaker verification has demonstrated better performance for fusion of decisions with favourable dependence compared to statistically independent decisions, the performance is not always optimal. Given a pool of instances, the best performance with this architecture is obtained for certain combinations of instances. Heuristic rules and diversity measures have been commonly used for classifier selection, but it is shown that optimal performance is achieved by the ‘best combination performance’ rule. As the search complexity of this rule increases exponentially with the addition of classifiers, a measure, the sequential error ratio (SER), is proposed in this work that is specifically adapted to the characteristics of the sequential fusion architecture. The proposed measure can be used to select the classifier that is most likely to produce a correct decision at each stage. Error rates for fusion of text-dependent HMM-based speaker models using SER are compared with other classifier selection methodologies. SER is shown to achieve near-optimal performance for sequential fusion of multiple instances with or without the use of multiple samples. The methodology applies to multiple speech utterances for telephone- or internet-based access control, and to other systems such as multiple-fingerprint and multiple-handwriting-sample identity verification systems.
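
As an illustration of how such a sequential architecture trades off the two error types, the sketch below wires a classifier's m attempts together with OR logic (reducing false rejects) and chains the n instances with AND logic (reducing false accepts), taking the classifiers in the order given by a selection score such as the proposed SER. This wiring is one plausible configuration of the architecture, and the function names are illustrative.

    # Minimal sketch of multiple-instance, multiple-sample decision fusion.
    def sequential_fusion(classifiers, get_sample, m):
        """classifiers: decision functions returning True (accept) for a
        sample, pre-sorted best-SER-first; get_sample(i, j) returns the
        j-th utterance presented to classifier i."""
        for i, clf in enumerate(classifiers):
            accepted = False
            for j in range(m):                 # multiple samples: OR fusion,
                if clf(get_sample(i, j)):      # lowers false rejects
                    accepted = True
                    break
            if not accepted:                   # multiple instances: AND fusion,
                return False                   # lowers false accepts
        return True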

Relevance:

10.00%

Publisher:

Abstract:

In the real world there are many problems in networks of networks (NoNs) that can be abstracted to a so-called minimum interconnection cut problem, which is fundamentally different from the classical minimum cut problems in graph theory. It is therefore desirable to develop an efficient and effective algorithm for the minimum interconnection cut problem. In this paper we formulate the problem in graph-theoretic terms, transform it into a multi-objective and multi-constraint combinatorial optimization problem, and propose a hybrid genetic algorithm (HGA) for solving it. The HGA is a penalty-based genetic algorithm (GA) that incorporates an effective heuristic procedure to locally optimize the individuals in the population of the GA. The HGA has been implemented and evaluated by experiments, and the experimental results show that it is both effective and efficient.
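
The abstract does not spell out the encoding, so the sketch below illustrates the two ingredients it names under simple assumptions: a penalty-based fitness for a cut encoded as a 0/1 side assignment per node, and a greedy local-improvement pass of the kind a hybrid GA interleaves with its genetic operators. The edge list, side constraints and penalty weight are all illustrative.

    # Penalty-based fitness: count cut edges, charge violated constraints.
    def fitness(chrom, edges, must0, must1, penalty=1000.0):
        cut = sum(1 for u, v in edges if chrom[u] != chrom[v])
        violations = sum(1 for u in must0 if chrom[u] != 0)
        violations += sum(1 for v in must1 if chrom[v] != 1)
        return cut + penalty * violations        # to be minimised

    # Greedy bit-flip descent used to locally optimize GA individuals.
    def local_improve(chrom, edges, must0, must1):
        best = fitness(chrom, edges, must0, must1)
        improved = True
        while improved:
            improved = False
            for u in range(len(chrom)):
                chrom[u] ^= 1                    # try moving node u across
                f = fitness(chrom, edges, must0, must1)
                if f < best:
                    best, improved = f, True
                else:
                    chrom[u] ^= 1                # undo the flip
        return chrom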

Relevance:

10.00%

Publisher:

Abstract:

Evolutionary computation is an effective tool for solving optimization problems. However, its significant computational demand has limited its real-time and on-line applications, especially in embedded systems with limited computing resources, e.g., mobile robots. Heuristic methods such as genetic algorithm (GA) based approaches have been investigated for robot path planning in dynamic environments. However, research on the simulated annealing (SA) algorithm, another popular evolutionary computation algorithm, for dynamic path planning is still limited, mainly due to its high computational demand. An enhanced SA approach, which integrates two additional mathematical operators and initial path selection heuristics into the standard SA, is developed in this work for robot path planning in dynamic environments with both static and dynamic obstacles. It improves the computing performance of the standard SA significantly while giving an optimal or near-optimal robot path solution, making real-time and on-line applications possible. Using the classic and deterministic Dijkstra algorithm as a benchmark, comprehensive case studies are carried out to demonstrate the performance of the enhanced SA and other SA algorithms in various dynamic path planning scenarios.
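
For reference, the standard SA loop that such enhancements build on can be stated compactly. The sketch below anneals a piecewise-linear path under a length cost; obstacle penalties would enter through the cost function, and the neighbour operator, cooling schedule and constants are illustrative assumptions rather than the enhanced SA of this work.

    import math, random

    def path_length(path):
        """Total length of a list of 2D waypoints."""
        return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

    def anneal(path, neighbour, cost=path_length,
               t0=1.0, cooling=0.995, iters=20000):
        """neighbour(path) returns a perturbed copy, e.g. one moved waypoint."""
        best = cur = path
        t = t0
        for _ in range(iters):
            cand = neighbour(cur)
            delta = cost(cand) - cost(cur)
            # Accept improvements always, worse moves with Boltzmann probability.
            if delta < 0 or random.random() < math.exp(-delta / t):
                cur = cand
                if cost(cur) < cost(best):
                    best = cur
            t *= cooling                      # geometric cooling schedule
        return best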

Relevance:

10.00%

Publisher:

Abstract:

Cloud computing is an emerging computing paradigm in which IT resources are provided over the Internet as a service to users. One such service offered through the Cloud is Software as a Service, or SaaS. SaaS can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. SaaS is receiving substantial attention today from both software providers and users, and analyst firms predict a positive future market for it. This raises new challenges for providers managing SaaS, especially in large-scale data centres like the Cloud. One of these challenges is providing management of Cloud resources for SaaS that guarantees SaaS performance while optimising resource use. Extensive research on the resource optimisation of Cloud services has not yet addressed the challenges of managing resources for composite SaaS. This research addresses this gap by focusing on three new problems of composite SaaS: placement, clustering and scalability. The overall aim is to develop efficient and scalable mechanisms that facilitate the delivery of high-performance composite SaaS for users while optimising the resources used. All three problems are characterised as highly constrained, large-scale and complex combinatorial optimisation problems; therefore, evolutionary algorithms are adopted as the main technique in solving them.

The first research problem concerns how a composite SaaS is placed onto Cloud servers to optimise its performance while satisfying the SaaS resource and response time constraints. Existing research on this problem often ignores the dependencies between components and considers placement of a homogeneous type of component only. A precise formulation of the composite SaaS placement problem is presented. A classical genetic algorithm and two versions of cooperative co-evolutionary algorithms are designed to manage the placement of heterogeneous types of SaaS components together with their dependencies, requirements and constraints. Experimental results demonstrate the efficiency and scalability of these new algorithms.

In the second problem, SaaS components are assumed to be already running on Cloud virtual machines (VMs). However, due to the dynamic environment of a Cloud, the current placement may need to be modified. Existing techniques have focused mostly on the infrastructure level rather than the application level. This research addresses the problem at the application level by clustering suitable components onto VMs to optimise the resources used and to maintain SaaS performance. Two versions of grouping genetic algorithms (GGAs) are designed to cater for the structural grouping of a composite SaaS. The first GGA uses a repair-based method while the second uses a penalty-based method to handle the problem constraints. The experimental results confirm that the GGAs always produce a better reconfiguration placement plan than a common heuristic for clustering problems.

The third research problem deals with the replication or deletion of SaaS instances to cope with the SaaS workload. Determining a scaling plan that minimises the resources used while maintaining SaaS performance is a critical task; additionally, the problem involves constraints and interdependency between components, making solutions even more difficult to find. A hybrid genetic algorithm (HGA) was developed to solve this problem by exploring the problem search space through its genetic operators and fitness function to determine the SaaS scaling plan. The HGA also uses the problem's domain knowledge to ensure that solutions meet the problem's constraints and achieve its objectives. The experimental results demonstrate that the HGA consistently outperforms a heuristic algorithm by achieving a lower-cost scaling and placement plan.

This research has identified three significant new problems for composite SaaS in the Cloud, and the various evolutionary algorithms developed to address them contribute to the evolutionary computation field. The algorithms provide solutions for efficient resource management of composite SaaS in the Cloud, resulting in a low total cost of ownership for users while guaranteeing SaaS performance.
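
As a small illustration of the repair-based constraint handling used by the first GGA variant, the sketch below takes a grouping chromosome that maps each SaaS component to a VM and relocates components from overloaded VMs to the first VM with spare capacity. A single resource dimension is assumed for simplicity, and all structures are illustrative rather than the thesis's encoding.

    # Repair operator for a grouping chromosome: assign[c] = VM index for
    # component c; demand[c] and capacity[v] are in the same resource unit.
    def repair(assign, demand, capacity):
        load = [0.0] * len(capacity)
        for c, v in enumerate(assign):
            load[v] += demand[c]
        for c, v in enumerate(assign):
            if load[v] > capacity[v]:               # VM overloaded:
                for w in range(len(capacity)):      # first-fit relocation
                    if w != v and load[w] + demand[c] <= capacity[w]:
                        load[v] -= demand[c]
                        load[w] += demand[c]
                        assign[c] = w
                        break
        return assign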

Relevance:

10.00%

Publisher:

Abstract:

This paper describes a method for analysing videogames based on game activities, and examines the impact of these activities on the player experience. The research approach applies heuristic checklists that deconstruct games in terms of the cognitive processes that players engage in during gameplay (e.g., addressing goals, interpreting feedback). For this study we examined three puzzle games: Portal 2, I-Fluid and Braid. The Player Experience of Need Satisfaction (PENS) survey is used to measure player experience following gameplay. The cognitive action provided within games is examined in light of reported player experiences to determine the extent to which these activities influence players’ feelings of competence, autonomy, intuitive control and presence. Findings indicate that positive experiences are directly influenced by game activity design. Our study also demonstrates the value of expert review in deconstructing gameplay activity as a means of providing direction for game design that enhances the player experience.

Relevance:

10.00%

Publisher:

Abstract:

Increasing global competition, rapid technological changes, advances in manufacturing and information technology, and discerning customers are forcing supply chains to adopt improvement practices that enable them to deliver high-quality products at a lower cost and in a shorter period of time. A lean initiative is one of the most effective approaches toward achieving this goal. In the lean improvement process, it is critical to measure current and desired performance levels in order to clearly evaluate lean implementation efforts. Many attempts have been made to measure supply chain performance incorporating both quantitative and qualitative measures, but they have failed to provide an effective method of measuring performance improvements in dynamic lean supply chain situations. Appropriate measurement of lean supply chain performance has therefore become imperative. There are many lean tools available for supply chains; however, the effectiveness of a lean tool depends on the type of product and supply chain. One tool may be highly effective for a supply chain involved in high-volume products but not for low-volume products. There is currently no systematic methodology available for selecting appropriate lean strategies based on the type of supply chain and market strategy.

This thesis develops an effective method to measure supply chain performance using both quantitative and qualitative metrics, and investigates the effects of product types and lean tool selection on supply chain performance. Supply chain performance metrics, and the effects of various lean tools on the performance metrics specified in the SCOR framework, have been investigated. A lean supply chain model based on the SCOR metric framework is then developed, in which non-lean and lean, as well as quantitative and qualitative, metrics are incorporated. The values of the metrics are converted into triangular fuzzy numbers using similarity rules and heuristic methods. Data were collected from an apparel manufacturing company for multiple supply chain products, and a fuzzy-based method was applied to measure the performance improvements in the supply chains. Using the fuzzy TOPSIS method, which chooses an optimum alternative that maximises similarity with the positive ideal solution and minimises similarity with the negative ideal solution, the performance of lean and non-lean supply chain situations for three different apparel products was evaluated.

To address the research questions related to an effective performance evaluation method and the effects of lean tools on different types of supply chains, a conceptual framework and two hypotheses were investigated. Empirical results show that the implementation of lean tools has significant effects on performance improvements in terms of time, quality and flexibility. The fuzzy TOPSIS based method developed is able to integrate multiple supply chain metrics into a single performance measure, while the lean supply chain model incorporates qualitative and quantitative metrics; it can therefore effectively measure the improvements in a supply chain after implementing lean tools. It is demonstrated that the product types involved in the supply chain and the ability to select the right lean tools have a significant effect on lean supply chain performance. Future studies could conduct multiple case studies in different contexts.
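
The ranking step of fuzzy TOPSIS is compact enough to sketch. Below, each alternative (e.g. a lean versus a non-lean configuration) is rated on each criterion by a triangular fuzzy number (l, m, u), assumed already normalised to [0, 1] with benefit orientation; the closeness coefficient compares vertex distances to the fuzzy positive ideal (1, 1, 1) and negative ideal (0, 0, 0). Criterion weights are omitted for brevity, and the numbers shown are illustrative, not the thesis's data.

    import math

    def tfn_dist(a, b):
        """Vertex distance between two triangular fuzzy numbers (l, m, u)."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / 3)

    def closeness(alternative):
        """alternative: list of (l, m, u) ratings, one per criterion."""
        d_pos = sum(tfn_dist(r, (1, 1, 1)) for r in alternative)
        d_neg = sum(tfn_dist(r, (0, 0, 0)) for r in alternative)
        return d_neg / (d_pos + d_neg)     # higher = closer to the ideal

    lean     = [(0.6, 0.8, 1.0), (0.5, 0.7, 0.9)]   # e.g. time, quality
    non_lean = [(0.3, 0.5, 0.7), (0.4, 0.6, 0.8)]
    print(closeness(lean) > closeness(non_lean))     # True: lean ranks higher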

Relevance:

10.00%

Publisher:

Abstract:

This thesis presents an analysis of the resource allocation problem in Orthogonal Frequency Division Multiplexing based multi-hop wireless communications systems. The study analyzed the tractable nature of the problem and designed several heuristic and fairness-aware resource allocation algorithms. These algorithms are fast and efficient and therefore can improve power management in wireless systems significantly.

Relevance:

10.00%

Publisher:

Abstract:

Whole-image descriptors such as GIST have been used successfully for persistent place recognition when combined with temporal or sequential filtering techniques. However, whole-image descriptor localization systems often apply a heuristic rather than a probabilistic approach to place recognition, requiring substantial environment-specific tuning prior to deployment. In this paper we present a novel online solution that uses statistical approaches to calculate place recognition likelihoods for whole-image descriptors, without requiring either environment-specific tuning or pre-training. Using a real-world benchmark dataset, we show that this method creates distributions appropriate to a specific environment in an online manner. Our method performs comparably to FAB-MAP in raw place recognition performance, and integrates into a state-of-the-art probabilistic mapping system to provide superior performance to whole-image methods that are not based on true probability distributions. The method provides a principled means for combining the powerful change-invariant properties of whole-image descriptors with probabilistic back-end mapping systems, without the need for prior training or system tuning.
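
One simple way to realise the online statistical idea, sketched under assumptions that go beyond the abstract: maintain running statistics of descriptor distances to non-matching places as matching proceeds (Welford's algorithm), and convert each new distance into a match likelihood based on how unusually small it is under that distribution. The Gaussian model and the squashing function below are illustrative choices, not necessarily the paper's.

    import math

    class OnlineLikelihood:
        def __init__(self):
            self.n, self.mean, self.m2 = 0, 0.0, 0.0

        def update(self, d):
            """Welford's online mean/variance over non-match distances."""
            self.n += 1
            delta = d - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (d - self.mean)

        def match_likelihood(self, d):
            """How unusually small is distance d among non-matches?"""
            if self.n < 2:
                return 0.5                 # uninformative until warmed up
            std = math.sqrt(self.m2 / (self.n - 1)) or 1e-9
            z = (self.mean - d) / std
            return 1.0 / (1.0 + math.exp(-z))   # squashed z-score in (0, 1)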

Relevance:

10.00%

Publisher:

Abstract:

MapReduce is a computation model for processing large data sets in parallel on large clusters of machines in a reliable, fault-tolerant manner. A MapReduce computation is broken down into a number of map tasks and reduce tasks, which are performed by so-called mappers and reducers, respectively. The placement of the mappers and reducers on the machines directly affects the performance and cost of the MapReduce computation in cloud computing. From the computational point of view, the mappers/reducers placement problem is a generalization of the classical bin packing problem, which is NP-complete. Thus, in this paper we propose a new heuristic algorithm for the mappers/reducers placement problem in cloud computing and evaluate it against several other heuristics on solution quality and computation time by solving a set of test problems with various characteristics. The computational results show that our heuristic algorithm is much more efficient than the other heuristics and can obtain a better solution in a reasonable time. Furthermore, we verify the effectiveness of our heuristic algorithm by comparing the mapper/reducer placement it generates for a benchmark problem with a conventional placement that puts a fixed number of mappers/reducers on each machine. The comparison shows that the computation using our mapper/reducer placement is much cheaper than the computation using the conventional placement while still satisfying the computation deadline.
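
Since the problem generalises bin packing, a natural baseline is the first-fit decreasing heuristic, sketched below: place the most demanding tasks first, each on the first machine with enough remaining capacity. This is a generic baseline in the spirit of the comparison described, not the paper's algorithm; a single resource dimension and the data structures are assumptions.

    # First-fit decreasing placement of mapper/reducer tasks onto machines.
    def first_fit_decreasing(task_demands, machine_capacities):
        """Returns placement[task] = machine index, or raises if a task
        cannot be placed on any machine."""
        order = sorted(range(len(task_demands)),
                       key=lambda t: task_demands[t], reverse=True)
        free = list(machine_capacities)
        placement = [None] * len(task_demands)
        for t in order:
            for m, room in enumerate(free):
                if task_demands[t] <= room:
                    placement[t] = m
                    free[m] -= task_demands[t]
                    break
            else:
                raise ValueError(f"task {t} does not fit on any machine")
        return placement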

Relevance:

10.00%

Publisher:

Abstract:

Whole-image descriptors have recently been shown to be remarkably robust to perceptual change, especially compared to local features. However, whole-image-based localization systems typically rely on heuristic methods for determining appropriate matching thresholds in a particular environment. These environment-specific tuning requirements, and the lack of a meaningful interpretation of the resulting arbitrary thresholds, limit the general applicability of these systems. In this paper we present a Bayesian model of probability for whole-image descriptors that can be seamlessly integrated into localization systems designed for probabilistic visual input. We demonstrate this method using CAT-Graph, an appearance-based visual localization system originally designed for FAB-MAP-style probabilistic input. We show that using whole-image descriptors as visual input extends CAT-Graph’s functionality to environments that experience a greater amount of perceptual change. We also present a method of estimating whole-image probability models in an online manner, removing the need for a prior training phase. We show that this online, automated training method can perform comparably to pre-trained, manually tuned local-descriptor methods.
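
To show where such per-place probabilities plug in, the sketch below runs one step of a recursive Bayes filter over candidate places: the prior is diffused by a simple stay-or-teleport motion model and then reweighted by the whole-image observation likelihoods. The transition model and names are illustrative assumptions, not CAT-Graph's actual machinery.

    # One predict-update step of a Bayes filter over n candidate places.
    def bayes_update(prior, likelihoods, stay=0.9):
        n = len(prior)
        # Predict: mostly remain at the same place, small uniform mixing.
        predicted = [stay * p + (1 - stay) / n for p in prior]
        # Update with whole-image observation likelihoods and renormalise.
        posterior = [p * l for p, l in zip(predicted, likelihoods)]
        z = sum(posterior) or 1.0
        return [p / z for p in posterior]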

Relevance:

10.00%

Publisher:

Abstract:

Social tagging systems are shown to evidence a well-known cognitive heuristic, the guppy effect, which arises from the combination of different concepts. We present some empirical evidence of this effect, drawn from a popular social tagging Web service. The guppy effect is then described using a quantum-inspired formalism that has already been successfully applied to model the conjunction fallacy and probability judgement errors. Key to the formalism is the concept of interference, which is able to capture and quantify the strength of the guppy effect.
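
For readers unfamiliar with the formalism, a standard quantum-cognition account of concept combination (shown as an illustration in the style of Aerts-type models, not necessarily this paper's exact equations) represents concepts A and B by orthogonal unit vectors |A⟩ and |B⟩, models their combination as the superposition (|A⟩ + |B⟩)/√2, and measures membership of an item with an operator M, giving

    \mu(A \text{ and } B) = \frac{\mu(A) + \mu(B)}{2} + \operatorname{Re}\,\langle A|M|B\rangle .

The final term is the interference: when it is positive, an item such as ‘guppy’ can score higher as a member of the combined concept ‘pet fish’ than of either ‘pet’ or ‘fish’ alone, which is precisely the guppy effect the formalism quantifies.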